Category Archives: science

Art and science (IX) – Neural networks

This is a continuation of a series of blog posts, mostly written in French, about art and science.

In the past few years, we’ve seen the emergence of Deep Neural Networks (DNNs), and among the latest developments are Generative Adversarial Networks (GANs), where two neural networks are pitted against each other so that they learn to generate an object from a label or a simple drawing, or to mimic the style of an artist.
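To make the adversarial idea concrete, here is a minimal sketch of the two-player game on toy one-dimensional data, written in PyTorch. This is my own illustration (the network sizes, data distribution, and training settings are arbitrary), not the setup of any of the systems mentioned in this post: a generator learns to produce samples, and a discriminator learns to tell them apart from real ones.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# Generator: maps random noise to a fake "sample" (here a single scalar)
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how likely a sample is to be real (1) rather than fake (0)
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    real = 0.5 * torch.randn(64, 1) + 3.0     # "real" data, drawn from N(3, 0.5)
    fake = G(torch.randn(64, latent_dim))     # generator's attempt at faking it

    # Discriminator update: label real samples 1, generated samples 0
    opt_D.zero_grad()
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    d_loss.backward()
    opt_D.step()

    # Generator update: try to make the discriminator call the fakes real
    opt_G.zero_grad()
    g_loss = bce(D(fake), ones)
    g_loss.backward()
    opt_G.step()

print(fake.mean().item())  # drifts toward 3.0 as the generator learns the data
```

Image-generating GANs follow the same loop, just with convolutional networks and much larger datasets.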

The first ripple in the vast ocean of possibility was Deep Dream, though it wasn’t technically a GAN:

Now things have evolved even further: you can not only generate trippy videos, but also use neural networks to emulate the style of an artist and generate, from scratch, content that is genuinely appealing!

Continue reading

SPIE DCS 2018: CCSI – Computational Imaging

This year I’m chairing the Computational Imaging sessions at SPIE Defense + Commercial Sensing in Orlando, Fla., April 16-19, 2018, together with Aamod Shanker. We have invited many amazing speakers, and we are organizing a panel discussion on trends in computational imaging.

Here’s the program:

SESSION 6 TUE APRIL 17, 2018 – 11:10 AM TO 12:00 PM
Computational Imaging I
[10656-22] “Ultra-miniature…” David G. Stork, Rambus Inc. (USA)
[10656-36] “Computed axial lithography: volumetric 3D printing of arbitrary geometries” Indrasen Bhattacharya
Lunch/Exhibition Break Tue 12:00 pm to 1:50 pm

SESSION 7 TUE APRIL 17, 2018 – 1:50 PM TO 3:30 PM
Computational Imaging II
[10656-24] “Terahertz radar for imaging…” Goutam Chattopadhyay
[10656-23] “Computational imaging…” Lei Tian
[10656-26] “Achieving fast high-resolution 3D imaging” Dilworth Y. Parkinson
[10656-27] “Linear scattering theory in phase space” Aamod Shanker

PANEL DISCUSSION TUE APRIL 17, 2018 – 4:00 PM TO 6:00 PM

TUESDAY POSTER SESSION TUE 6:00 PM TO 8:00 PM

SESSION 8 WED APRIL 18, 2018 – 8:00 AM TO 10:05 AM
Computational Imaging III
[10656-28] “High resolution 3D imaging…” Michal Odstrcil
[10656-29] “A gigapixel camera array…” Roarke Horstmeyer
[10656-30] “EUV photolithography mask inspection using Fourier ptychography” Antoine Wojdyla
[10656-31] “New systems for computational x-ray phase imaging…” Jonathan C. Petruccelli
[10656-68] “Low dose x-ray imaging by photon counting detector” Toru Aoki

Continue reading

Oasys

With increasingly tight beamline specifications, optical modeling software has become necessary to design conceptual beamlines and predict their performance. This is particularly true with the advent of highly coherent light sources (such as the proposed upgrade of the ALS), where additional considerations such as mirror deformation under heat load and the effects of partial coherence need to be studied. Luca Rebuffi will present the latest features of OASYS/Shadow, an optical beamline modeling tool widely used in the synchrotron community, and show how to get started with beamline simulations.

https://github.com/awojdyla/ALS-U_Examples
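The repository above contains complete ALS-U examples; as a taste of what the Shadow3 Python bindings wrapped by OASYS look like, here is a minimal source-plus-one-element trace. The attribute names follow Shadow’s start.00/start.01 conventions, and the distances and defaults below are placeholders rather than a real beamline, so treat this as a sketch to adapt, not a working configuration.

```python
import Shadow

# Source definition (defaults plus a few overrides; names follow start.00)
src = Shadow.Source()
src.NPOINT = 10000            # number of rays to generate

# A single optical element with default settings (names follow start.01)
oe1 = Shadow.OE()
oe1.T_SOURCE = 1000.0         # source-to-element distance (user units, cm by default)
oe1.T_IMAGE = 500.0           # element-to-image distance

# Generate the rays and trace them through the element
beam = Shadow.Beam()
beam.genSource(src)
beam.traceOE(oe1, 1)

# Plot the footprint at the image plane (columns 1 and 3 are x and z)
Shadow.ShadowTools.plotxy(beam, 1, 3, nbins=101, nolost=1, title="Image plane")
```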

Program: Continue reading

Self-reference

Self-reference is a cornerstone of Hofstadter’s Gödel, Escher, Bach, a must-read for anyone interested in logic (and we shall rely on logic these days to stay sane).

Here’s a bunch of examples of self-reference that I found interesting, curated just for you!

Barber’s paradox:

The barber is the “one who shaves all those, and those only, who do not shave themselves.” The question is, does the barber shave himself?

Self-referential figure (via xkcd):

Tupper’s self-referential formula, which plots itself on screen (via Brett Richardson) Continue reading
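For the curious, here is a small sketch of how one might evaluate Tupper’s formula. The magic constant k (a several-hundred-digit integer published by Tupper) is not reproduced here, so the placeholder below just draws an arbitrary bitmap rather than the formula itself.

```python
import numpy as np
import matplotlib.pyplot as plt

def tupper(k, width=106, height=17):
    """Evaluate 1/2 < floor(mod(floor(y/17) * 2**(-17*x - y % 17), 2))
    over 0 <= x < width, k <= y < k + height, in exact integer arithmetic."""
    img = np.zeros((height, width), dtype=bool)
    for dy in range(height):
        y = k + dy
        for x in range(width):
            # multiplying by 2**(-n) then taking mod 2 just reads bit n of floor(y/17)
            bit = (y // 17) >> (17 * x + y % 17) & 1
            img[dy, x] = bool(bit)
    return img

# Placeholder; with Tupper's published constant k, the plot is the formula itself
k = 17 * 123456789
plt.imshow(tupper(k)[::-1, ::-1], cmap="gray_r")  # flipped to match the usual rendering
plt.show()
```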

Moore’s wall

A single chip such as the Intel Xeon Phi has a computational power in excess of 1 TFLOPS and features more than a hundred billion transistors. Few people outside the world of semiconductor engineering appreciate it, but that is a fantastical number: 100,000,000,000. If every transistor were a pixel, you would need a wall of 100 × 100 4K TV screens to display them all!
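A quick back-of-the-envelope check of that wall of screens, taking the round figure of 10^11 at face value:

```python
transistors = 100e9                 # the round figure quoted above
pixels_per_4k = 3840 * 2160         # about 8.3 million pixels per 4K screen
screens = transistors / pixels_per_4k
side = screens ** 0.5
print(f"{screens:,.0f} screens, i.e. a wall of roughly {side:.0f} x {side:.0f} 4K TVs")
# -> about 12,000 screens, a wall on the order of 110 x 110
```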

Over the past fifty years, the semiconductor industry has achieved incredible things, in part thanks to planar technology, which allowed the manufacturing process to scale exponentially, following Moore’s law. But it seems that we’re about to hit a wall soon.


Let’s take a look at where we stand, and where we go from here!

Continue reading

Quick’n’dirty

Over the years, I’ve collected quotes from people who are much smarter than I am.

I’ve always liked quotes, because they are atoms of knowledge: quick and dirty ways to understand the world we only have one life to explore. To some extent, they are axioms of life, in that they are true and never require an explanation (otherwise they wouldn’t be quoted).

Here’s a bunch of quotes that I found particularly interesting, starting with my absolute favorite, from the great Paul Valéry:

The folly of mistaking a paradox for a discovery, a metaphor for a proof, a torrent of verbiage for a spring of capital truths, and oneself for an oracle, is inborn in us. – Paul Valéry

On research — trial and error

Basic research is like shooting an arrow into the air and, where it lands, painting a target.
-Homer Burton Adkins

A thinker sees his own actions as experiments and questions–as attempts to find out something. Success and failure are for him answers above all.
– Friedrich Nietzsche
Continue reading

Learning Deep

In the past four years, there’s been a lot of progress in the field of machine learning, and here’s a story seen from the outskirts.

Eight years ago, for a mock start-up project, we tried to do some basic head tracking. At the time, my professor Stéphane Mallat told us that the most efficient way to do this was the Viola-Jones algorithm, which was still based on hand-crafted features (integral images and Haar features) and a boosted classifier (AdaBoost).
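That algorithm is still a one-liner away today: OpenCV ships pre-trained Haar cascades, so a rough sketch of Viola-Jones face detection looks like the following (this is not our original project code, and the image path is a placeholder).

```python
import cv2

# Load the pre-trained frontal-face Haar cascade bundled with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("face.jpg")                    # placeholder test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# The classifier slides Haar features over an image pyramid and applies
# a boosted cascade of weak classifiers at each window
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("face_detected.jpg", img)
```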

(I was thrilled when, a few years later, the Amazon Fire Phone embedded similar features; unfortunately, it was a complete bomb. Better technologies now exist and will make a splash pretty soon.)

Back then, the most advanced books on machine learning were “Information Theory, Inference, and Learning Algorithms” by David MacKay, a terrific read, and “Pattern Recognition and Machine Learning” by Chris Bishop (which I never read past chapter 3, for lack of time).

Oh boy, how things have changed!

Continue reading

Julia Language

A few years ago, I got interested in the then-nascent Julia language (julialang.org), a new open-source language with Matlab-like syntax and C-like performance, thanks to its just-in-time compiler.

Large Synoptic Survey Telescope (LSST, Chile) data being processed with Julia on supercomputers with 200x speedup (from https://arxiv.org/pdf/1611.03404.pdf)

It now seems that the language is gaining traction, with many available packages and good REPL and notebook integration (it works with Atom+Hydrogen, and I suspect Jupyter gets its first initial from Julia and Python), and it delivers on performance.

Julia is now used on supercomputers such as Berkeley Lab’s NERSC, taught at MIT (by no less than Steven G. Johnson, the guy who brought us FFTW and MEEP!), and I’ve noticed that some of the researchers from Harvard’s RoLi Lab whom I invited to SPIE DCS 2018 share the Julia code from their paper “Pan-neuronal calcium imaging with cellular resolution in freely swimming zebrafish“. Pretty cool!

Julia used for code-sharing in a Nature publication. I wish I could see that every day!

I got a chance to attend parts of JuliaCon 2017 in Berkeley. I was amazed by how dynamic the community was, in part supported by the Moore Foundation (Carly Strasser, now head of the Coko Foundation), and happy to see Chris Holdgraf (my former editor at the Science Review) thriving at the Berkeley Institute for Data Science (BIDS).

Julia picking up speed at Intel (picture taken during JuliaCon 2017)

I started sharing some code for basic image processing (JLo) on GitHub. Tell me what you think!

(By the way, I finally shared my MEEP scripts on GitHub, and they are here!)

Sexism in academia

This year, the recipients of the Nobel Prize were 100% men. That’s at the same time sad and scary; sad in itself, and scary because it seems that things are not changing at the pace they should.

Continue reading

ALS-U

Some may have been wondering what I have been up to lately!
At the beginning of the year, I started working on the ALS-U project, the upgrade of the Advanced Light Source, the main synchrotron at Lawrence Berkeley National Laboratory. The goal is to upgrade the facility to a Diffraction-Limited Storage Ring (DLSR) in order to increase the brilliance of the beam, allowing scientists from all over the world to perform the most precise experiments with bright, fully coherent beams with diameters as small as 10 nanometers, a few times the width of a strand of DNA. (Here’s a report on all the niceties you can do with such a tool: ALS-U: Solving Scientific Challenges with Coherent Soft X-Rays)
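For reference, here is my own rough gloss of what “diffraction-limited” means (not a quote from the project documents): a storage ring is diffraction-limited when the electron beam’s emittance drops below the intrinsic emittance of the photons it produces, roughly λ/(4π), which for soft x-rays around λ = 1 nm is on the order of 100 picometer-radians; that is the regime such upgrades aim for.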

Continue reading