Category Archives: projects

SPIE DCS 2018: CCSI – Computational Imaging

This year I’m chairing the Computational Imaging session at SPIE Defense + Commercial Sensing in Orlando, Florida, April 16–19, 2018, together with Aamod Shanker. We have invited many amazing speakers, and we are organizing a panel discussion on trends in computational imaging.

Here’s the program:

SESSION 6 TUE APRIL 17, 2018 – 11:10 AM TO 12:00 PM
Computational Imaging I
[10656-22] “Ultra-miniature…” David G. Stork, Rambus Inc. (USA)
[10656-36] “Computed axial lithography: volumetric 3D printing of arbitrary geometries” Indrasen Bhattacharya
Lunch/Exhibition Break Tue 12:00 pm to 1:50 pm

SESSION 7 TUE APRIL 17, 2018 – 1:50 PM TO 3:30 PM
Computational Imaging II
[10656-24] “Terahertz radar for imaging…” Goutam Chattopadhyay
[10656-23] “Computational imaging…” Lei Tian
[10656-26] “Achieving fast high-resolution 3D imaging” Dilworth Y. Parkinson
[10656-27] “Linear scattering theory in phase space” Aamod Shanker



SESSION 8 WED APRIL 18, 2018 – 8:00 AM TO 10:05 AM
Computational Imaging III
[10656-28] “High resolution 3D imaging…” Michal Odstrcil
[10656-29] “A gigapixel camera array…” Roarke Horstmeyer
[10656-30] “EUV photolithography mask inspection using Fourier ptychography” Antoine Wojdyla
[10656-31] “New systems for computational x-ray phase imaging…” Jonathan C. Petruccelli
[10656-68] “Low dose x-ray imaging by photon counting detector” Toru Aoki


Art and science (IX) – Neural networks

This is a continuation of a series of blog posts, written mostly in French, about art and science.

In the past few years, we’ve seen the emergence of Deep Neural Networks (DNNs), and the latest development is Generative Adversarial Networks (GANs), where two neural networks are pitted against each other so that they find the best way to generate an object from a label or a simple drawing, or to mimic the style of an artist.

The first ripple in the vast ocean of possibility was Deep Dream, though it wasn’t technically a GAN:

Now, things have evolved even further: you can not only generate trippy videos, but also use neural networks to emulate the style of an artist and generate, from scratch, content that is genuinely appealing!



Self-reference is a cornerstone of Hofstadter’s Gödel, Escher, Bach, a must-read for anyone interested in logic (and we shall rely on logic in these days to stay sane).

Here’s a bunch of examples of self-reference that I found interesting, curated just for you!

Barber’s paradox:

The barber is the “one who shaves all those, and those only, who do not shave themselves.” The question is, does the barber shave himself?
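As a sketch (the notation is mine: writing $S(b,x)$ for “$b$ shaves $x$”), the paradox can be made precise in one line, by instantiating the barber’s defining property at the barber himself:

```latex
\exists b\,\forall x\,\bigl(S(b,x) \leftrightarrow \neg S(x,x)\bigr)
\;\Longrightarrow\;
S(b,b) \leftrightarrow \neg S(b,b)
```

The right-hand side is a contradiction, so no such barber can exist: the “definition” quietly defines nothing.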

Self-referential figure (via xkcd):

Tupper’s formula, which plots itself on a screen (via Brett Richardson)
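The trick behind Tupper’s formula can be sketched in a few lines of Python. This is my own toy illustration (the function names are mine, and I use a small test bitmap rather than the real self-referential constant, a 543-digit number I won’t reproduce from memory): the formula is really just a bitmap decoder, reading pixels out of the binary digits of a huge integer k.

```python
H = 17  # Tupper's formula always plots a strip 17 pixels tall

def encode_k(bitmap):
    """Pack a bitmap (H rows, top row first) into a Tupper-style constant k."""
    width = len(bitmap[0])
    n = 0
    for x in range(width):
        for y in range(H):                 # y counted from the bottom of the strip
            if bitmap[H - 1 - y][x]:
                n |= 1 << (H * x + y)      # pixel (x, y) lives at bit 17x + y
    return H * n                           # the published constant is 17 * n

def tupper(x, y):
    """Tupper's inequality: 1/2 < floor(mod(floor(y/17) * 2^(-17x - y mod 17), 2)).

    The power of two just selects a single bit, so it reduces to a shift."""
    return ((y // H) >> (H * x + y % H)) & 1 == 1

def decode(k, width):
    """Evaluate the formula over 0 <= x < width, k <= y < k + 17."""
    return [[1 if tupper(x, k + (H - 1 - r)) else 0 for x in range(width)]
            for r in range(H)]
```

With the actual published constant, the bitmap that `decode` recovers is a picture of the formula itself, which is where the self-reference comes from: the formula plots whatever k encodes, and someone found the k that encodes the formula.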

Learning Deep

In the past four years, there’s been a lot of progress in the field of machine learning, and here’s a story seen from the outskirts.

Eight years ago, for a mock start-up project, we tried to do some basic head tracking. At the time, my professor Stéphane Mallat told us that the most efficient way to do this was the Viola-Jones algorithm, which was still based on hand-coded features (integral images and Haar features) and a boosted classifier (AdaBoost).
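To make the two ingredients above concrete, here is a minimal sketch (toy code of my own, not the full Viola-Jones cascade): an integral image, and one two-rectangle Haar-like feature evaluated in constant time with it.

```python
def integral_image(img):
    """ii[r][c] = sum of img over rows < r and cols < c (zero-padded border)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum over the h-by-w rectangle with top-left corner (r, c), in O(1):
    four lookups in the integral image, regardless of rectangle size."""
    return ii[r + h][c + w] - ii[r][c + w] - ii[r + h][c] + ii[r][c]

def haar_two_rect(ii, r, c, h, w):
    """Left-minus-right two-rectangle Haar feature (w must be even).

    Large responses indicate a vertical edge; AdaBoost's job is to pick
    the few thousand such features (and thresholds) that best separate
    faces from non-faces."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)
```

The constant-time rectangle sums are what made Viola-Jones fast enough for real-time detection on 2001-era hardware.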

(I was thrilled when, a few years later, the Amazon Fire Phone embedded similar features; unfortunately, it was a complete flop. Better technologies now exist and will make a splash pretty soon.)

Back then, the most advanced books on machine learning were “Information Theory, Inference, and Learning Algorithms” by David MacKay, a terrific read, and “Pattern Recognition and Machine Learning” by Chris Bishop (which I never read past chapter 3, for lack of time).

Oh boy, how things have changed!


Julia Language

A few years ago, I got interested in the then-nascent Julia language, a new open-source language with Matlab-like syntax and C-like performance, thanks to its just-in-time compiler.

Large Synoptic Survey Telescope (LSST, Chile) data being processed with Julia on supercomputers with 225x speedup

It now seems that the language is gaining traction, with many available packages and good REPL and notebook integration (it works with Atom+Hydrogen, and I suspect Jupyter gets its first initial from Julia and Python), and it delivers on performance.

Julia is now used on supercomputers such as Berkeley Lab’s NERSC, and taught at MIT (by no less than Steven G. Johnson, the guy who brought us FFTW and MEEP!). I’ve also noticed that some of the researchers from Harvard’s RoLi Lab whom I invited to SPIE DCS 2018 are sharing the Julia code from their paper “Pan-neuronal calcium imaging with cellular resolution in freely swimming zebrafish”. Pretty cool!

Julia used for code-sharing in a Nature publication. I wish I could see that every day!

I got a chance to attend parts of JuliaCon 2017 in Berkeley. I was amazed by how dynamic the community is, in part supported by the Moore Foundation (Carly Strasser, now head of the Coko Foundation), and happy to see Chris Holdgraf (my former editor at the Berkeley Science Review) thriving at the Berkeley Institute for Data Science (BIDS).

Julia picking up speed at Intel (picture taken during JuliaCon 2017)

I started sharing some code for basic image processing (JLo) on GitHub. Tell me what you think!

(By the way, I finally shared my meep scripts on GitHub; they’re here!)

Open Access

My article for the Berkeley Science Review on Open Access is out, and it is available here (for free, of course!): Science to the people.

“Astronomers and physicists have been sharing pre-prints since before the web existed,” says Alberto Pepe, founder of the authoring and pre-printing platform Authorea. “Pre-prints are an effective (and fully legal) way to make open access a reality in all scholarly fields.” Within hours, articles are available online, and scientists can interact with the author, leaving comments and feedback. Importantly, submission, storage, and access are all free. The pre-printing model ensures that an author’s work is visible and properly indexed by a number of tools, such as Google Scholar.

Special thanks to Rachael Samberg from the UC Library and Alberto Pepe from Authorea.

Things seem to be changing quickly in this field, thanks to institutional efforts.

Here’s a list of resources I’ve compiled from the talk by Laurence Bianchini from MyScienceWork, whom I invited to LBL, and from a piece written by Nils Zimmerman on Open Access at LBNL: Open Access publishing at Berkeley Lab.

Farewell to BPEP!

Yesterday I organized my last event with the Berkeley Postdoc Entrepreneurial Program (BPEP), an association dedicated to helping young researchers turn their science into companies that can benefit the economy directly. I served for about two years as the liaison for Berkeley Lab and helped organize over a dozen events, being directly responsible for four of them (on government funding, intellectual property, the art of the pitch, and lastly a job fair).

BPEP team with UC Berkeley vice-chancellor for research, Paul Alivisatos


Berkeley Lab Postdoc Association

Dear reader,

I haven’t been very communicative lately, for I was kept busy by a very cool new venture: the birth of the Berkeley Lab Postdoc Association. The new association is meant to bring together the over one thousand postdocs at Berkeley Lab, provide them with support and career advice, and bring feedback to lab management about issues encountered by postdocs. Now that the association is alive and well (see the blog), I can tell a little of its story.


Wanted: science tools for the digital age

The internet may be less than 10,000 days old, but it already fails to deliver for scientists.

By making it easy for institutions to track publication counts, it pushes researchers even further to publish many half-baked ideas and to follow the hype instead of pursuing long-term research. It is true that it has never been simpler to get access to a paper, and the internet makes life easier in many respects; collaboration often just requires sending an email. But new hurdles have appeared, and these should be removed.

Here is a bunch of ideas on how to use the new digital tools at hand to make research easier, and thus more efficient, along with a limited overview of what we have now.

Seminars & Luminaries

Here’s a bunch of resources where you can find cool seminars by serious scientists (I mean not the kind of pseudo-scientific, inspirational talks you’ll find all over the web).

Me, about to ask a question to Leon :)

Yup, that’s me.

In English

Feynman’s talks are an endless source of excitement. Project Tuva is a must-see, but many other videos are available on YouTube (Fun to Imagine, The Pleasure of Finding Things Out).

The Chua Lectures – very recent lectures on memristors and chaos. Fascinating!

The Royal Institution – very cool videos about science

Edge – there are lots of cool videos on a wide variety of subjects

In French

Les Ernest – the ENS offers a large number of short videos (15 min) on all sorts of subjects, by the leading experts in each field.

Séminaire Général du Département de Physique de l’École Polytechnique – a much more in-depth treatment (1 h) of a particular topic in physics.

Enjoy!