This year I’m chairing the Computational Imaging session at SPIE Defense + Commercial Sensing in Orlando, Fla., April 16-19, 2018, together with Aamod Shanker. We have invited many amazing speakers, and we are organizing a panel discussion on trends in computational imaging.

Here’s the program:

SESSION 6 TUE APRIL 17, 2018 – 11:10 AM TO 12:00 PM
With increasingly tight beamline specifications, optical modeling software becomes necessary to design conceptual beamlines and predict their performance. This is particularly true with the advent of highly coherent light sources (such as the proposed upgrade of the ALS), where additional considerations such as mirror deformation under heat load and the effects of partial coherence need to be studied. Luca Rebuffi will present the latest features of OASYS/Shadow, an optical beamline modeling tool widely used in the synchrotron community, and show how to get started with beamline simulations.

https://github.com/awojdyla/ALS-U_Examples

Program: Continue reading
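To give a flavor of what a ray-tracing code like Shadow actually computes, here is a toy sketch (not the OASYS API, just the underlying idea): each ray is a (position, angle) pair, and optical elements act on it as 2×2 ray-transfer matrices.

```python
import numpy as np

# Toy geometric ray trace in (x, theta) phase space, the picture behind
# beamline codes like Shadow. This is NOT the OASYS/Shadow API -- just a
# sketch of the idea: rays are (position, angle) pairs, and each optical
# element is a 2x2 ABCD ray-transfer matrix.

def drift(L):
    """Free-space propagation over a distance L (meters)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def focusing_mirror(f):
    """Ideal focusing element with focal length f (thin-lens approximation)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A point source emitting rays with a small angular spread
thetas = np.linspace(-1e-3, 1e-3, 5)              # radians
rays = np.stack([np.zeros_like(thetas), thetas])  # shape (2, n_rays)

# Source -> 10 m drift -> f = 5 m mirror -> 10 m drift (2f-2f imaging)
system = drift(10.0) @ focusing_mirror(5.0) @ drift(10.0)
image = system @ rays

print(image[0])  # ray positions at the image plane: all refocused to x = 0
```

Real beamline simulations add source statistics, mirror figure errors, and (for coherent sources) wave-optical propagation on top of this skeleton.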
Self-reference is a cornerstone of Hofstadter’s Gödel, Escher, Bach, a must-read book for anyone interested in logic (and we shall rely on logic in these days to stay sane.)

Here’s a bunch of examples of self-reference that I found interesting, curated just for you!

Barber’s paradox:
The barber is the “one who shaves all those, and those only, who do not shave themselves.” The question is, does the barber shave himself?
Self-referential figure (via xkcd):

Tupper’s formula that prints itself on a screen (via Brett Richardson)

Continue reading
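Why does Tupper’s formula print itself? The trick is that the famous constant k is just a base-2 encoding of a 17-row bitmap, and the formula extracts one bit per pixel. A minimal sketch (integer arithmetic, tiny bitmap instead of the real 540-digit constant):

```python
# Tupper's "self-referential" formula: for the right constant k, plotting
#   1/2 < floor( mod( floor(y/17) * 2**(-17*floor(x) - mod(floor(y),17)), 2 ) )
# over k <= y < k + 17 draws the formula itself. The magic is that k simply
# encodes the bitmap in base 2; here we round-trip a small bitmap to show it.

def tupper_pixel(x, y):
    """Evaluate Tupper's inequality at the integer point (x, y)."""
    return ((y // 17) // 2 ** (17 * x + y % 17)) % 2 == 1

def encode_bitmap(rows):
    """Pack a 17-row bitmap ('#' = pixel on) into a Tupper constant k."""
    assert len(rows) == 17
    n = 0
    for j, row in enumerate(rows):
        for x, ch in enumerate(row):
            if ch == '#':
                n += 2 ** (17 * x + j)
    return 17 * n  # k must be a multiple of 17

# Round trip: any 17-row image reappears exactly in the strip k <= y < k + 17
bitmap = ['#..', '.#.', '..#'] + ['...'] * 14
k = encode_bitmap(bitmap)
decoded = [''.join('#' if tupper_pixel(x, k + j) else '.' for x in range(3))
           for j in range(17)]
print(decoded == bitmap)  # True
```

So the formula “prints itself” only because someone found the k that encodes a picture of the formula; it would just as happily print anything else.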
Over the years I’ve collected quotes from people I admire. I have always liked quotes, because they are atoms of knowledge: quick and dirty ways to understand a world we only have one life to explore. To some extent, they are axioms of life, in that they are true and never require an explanation (otherwise they wouldn’t be quotations.)

Here’s a bunch of quotes that I found particularly interesting, starting with my absolute favorite, from the great Paul Valéry:
The folly of mistaking a paradox for a discovery, a metaphor for a proof, a torrent of verbiage for a spring of capital truths, and oneself for an oracle, is inborn in us. – Paul Valery
Basic research is like shooting an arrow into the air and, where it lands, painting a target.
– Homer Burton Adkins
In the past four years, there’s been a lot of progress in the field of machine learning, and here’s the story as seen from the outskirts.

Eight years ago, for a mock start-up project, we tried to do some basic head tracking. At that time, my professor Stéphane Mallat told us that the most efficient way to do this was the Viola-Jones algorithm, which was still based on hand-crafted features (integral images and Haar features) and a boosted classifier (AdaBoost).
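The two building blocks just mentioned are easy to sketch: the integral image lets you sum any rectangle of pixels in four lookups, and a Haar-like feature is just a difference of such rectangle sums. A minimal sketch (not the full Viola-Jones detector):

```python
import numpy as np

# Sketch of the two Viola-Jones ingredients named above: the integral image,
# and a Haar-like feature computed from it in O(1). Not the full detector.

def integral_image(img):
    """ii[y, x] = sum of img over the rectangle [0..y) x [0..x),
    zero-padded so an empty rectangle sums to zero."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of an h x w rectangle with top-left corner (y, x): 4 lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar feature: left half minus right half (w even)."""
    return rect_sum(ii, y, x, h, w // 2) - rect_sum(ii, y, x + w // 2, h, w // 2)

img = np.arange(16).reshape(4, 4)   # tiny 4x4 "image"
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))     # 5 + 6 + 9 + 10 = 30
```

AdaBoost then picks, out of thousands of such features, the handful that best separate faces from non-faces, chained into a cascade for speed.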
At the time, the most advanced book on machine learning was “Information Theory, Inference, and Learning Algorithms” by David MacKay, a terrific book to read, along with “Pattern Recognition and Machine Learning” by Chris Bishop (which I never read past chapter 3, for lack of time).

Oh boy, how things have changed! Continue reading
It now seems that the language is gaining traction, with many available packages, good REPL integration (it works with Atom+Hydrogen, and I suspect the “Ju” in Jupyter comes from Julia, alongside Python and R), and it delivers on performance. Julia is now used on supercomputers such as Berkeley Lab’s NERSC, and taught at MIT (by no less than Steven G. Johnson, the man who brought us FFTW and MEEP!). I’ve also noticed that some of the researchers from Harvard’s RoLi Lab whom I invited to SPIE DCS 2018 are sharing the Julia code from their paper “Pan-neuronal calcium imaging with cellular resolution in freely swimming zebrafish“. Pretty cool!
I got a chance to attend parts of JuliaCon 2017 in Berkeley. I was amazed by how dynamic the community is, in part thanks to support from the Moore Foundation (Carly Strasser, now head of the Coko Foundation), and happy to see Chris Holdgraf (my former editor at the Science Review) thriving at the Berkeley Institute for Data Science (BIDS).
I started sharing some code for basic image processing (JLo) on GitHub. Tell me what you think!

(By the way, I finally shared my MEEP scripts on GitHub, and they’re here!)
Some may have been wondering what I have been up to lately!
At the beginning of the year, I started working on the ALS-U project, the upgrade of the Advanced Light Source, the main synchrotron at Lawrence Berkeley National Laboratory. The goal is to equip the facility with a Diffraction-Limited Storage Ring (DLSR) in order to increase the brilliance of the beam, allowing scientists from all over the world to perform the most precise experiments with bright, fully coherent beams focused down to diameters as small as 10 nanometers (about five times the diameter of a DNA double helix). (Here’s a report on all the niceties you can do with such a tool: ALS-U: Solving Scientific Challenges with Coherent Soft X-Rays)
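What does “diffraction-limited” mean in numbers? A common back-of-the-envelope criterion is that the electron-beam emittance must drop below the emittance of the photon beam itself, λ/(4π). A quick sketch with illustrative numbers (not ALS-U specifications):

```python
import math

# Back-of-the-envelope check of what "diffraction-limited" means: a storage
# ring is diffraction-limited at wavelength lambda once its electron-beam
# emittance falls below the photon-beam emittance, commonly quoted as
# lambda / (4*pi). Illustrative numbers only, not ALS-U specifications.

def diffraction_limited_emittance(wavelength_m):
    """Photon-beam emittance lambda/(4*pi), in meter-radians."""
    return wavelength_m / (4 * math.pi)

# Soft X-rays around 1 nm, typical of the science case
eps = diffraction_limited_emittance(1e-9)
print(f"{eps * 1e12:.0f} pm·rad")  # ~80 pm·rad
```

For comparison, third-generation rings sit at a few nm·rad of horizontal emittance, so reaching tens of pm·rad is what makes a DLSR upgrade such a leap.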
I’ve run my first quantum computation!

Since I was working on the latest iteration of classical computer manufacturing techniques (EUV lithography), everyone asked me what my thoughts were on the future of Moore’s law, and what I thought about quantum computing. To the first question, I could mumble things about transistor size and the fact that we’re getting awfully close to atomic scale; as for the latter question… I just had to go figure it out myself!

Back in April, I invited Irfan Siddiqi (qnl.berkeley.edu), founding director of the brand-new Center for Quantum Coherent Science, and his postdocs at Berkeley Lab to give a talk to postdocs, and the lab recently announced the first 45-qubit quantum simulation at NERSC… things are going VERY fast! (Read the Quantum manifesto.) Continue reading
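For the curious, the smallest possible “quantum computation” can be simulated classically in a few lines: put one qubit in superposition with a Hadamard gate and check the measurement statistics. A plain state-vector sketch with numpy (not a real device, and not the platform I ran my computation on):

```python
import numpy as np

# A one-qubit "quantum computation", simulated classically: a Hadamard gate
# creates an equal superposition, and the Born rule gives the measurement
# probabilities. Plain numpy state-vector simulation, not real hardware.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
zero = np.array([1.0, 0.0])                    # |0> state

state = H @ zero                               # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2                     # Born rule
print(probs)                                   # [0.5, 0.5]: a fair coin flip

# Applying H twice undoes it: interference brings us back to |0>
back = H @ (H @ zero)
print(np.round(np.abs(back) ** 2, 6))          # [1., 0.]
```

The catch, of course, is that this state vector doubles in size with every added qubit, which is why a 45-qubit simulation needs a machine like NERSC’s, and why real quantum hardware is interesting at all.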
“Astronomers and physicists have been sharing pre-prints since before the web existed,” says Alberto Pepe, founder of the authoring and pre-printing platform Authorea. “Pre-prints are an effective (and fully legal) way to make open access a reality in all scholarly fields.” Within hours, articles are available online, and scientists can interact with the author, leaving comments and feedback. Importantly, submission, storage, and access are all free. The pre-printing model ensures that an author’s work is visible and properly indexed by a number of tools, such as Google Scholar.
Here’s a list of resources that I compiled from the talk by Laurence Bianchini from MyScienceWork, whom I invited to LBL, and from a piece written by Nils Zimmerman on open access at LBNL: Open Access publishing at Berkeley Lab.