Category Archives: science

Self-reference

Self-reference is a cornerstone of Hofstadter’s Gödel, Escher, Bach, a must-read for anyone interested in logic (and we shall rely on logic in these days to stay sane).

Here’s a bunch of examples of self-reference that I found interesting, curated just for you!

Barber’s paradox:

The barber is the “one who shaves all those, and those only, who do not shave themselves.” The question is, does the barber shave himself?
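For the logically inclined, the resolution is that no such barber can exist; the contradiction is mechanical enough that a proof assistant dispatches it in a few lines (a sketch in Lean 4, no libraries needed):

```lean
-- No b can shave exactly those who do not shave themselves:
-- instantiating at x := b gives "b shaves himself ↔ b does not
-- shave himself", which is absurd.
theorem no_barber {Person : Type} (shaves : Person → Person → Prop) :
    ¬ ∃ b : Person, ∀ x : Person, shaves b x ↔ ¬ shaves x x :=
  fun ⟨b, h⟩ =>
    have hb : shaves b b ↔ ¬ shaves b b := h b
    have hn : ¬ shaves b b := fun hs => hb.mp hs hs
    hn (hb.mpr hn)
```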

Self-referential figure (via xkcd):

Tupper’s formula that prints itself on a screen (via Brett Richardson):
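If you want to play with it, here is a minimal sketch in Julia of how the trick works: the inequality just reads bits back out of the constant k, so any 17-row bitmap can be encoded and re-plotted (toy k here; I won’t retype the published constant, which runs to hundreds of digits).

```julia
# Tupper's inequality plots, over the strip 0 ≤ x < w and k ≤ y < k+17,
# exactly the bitmap whose bits are packed into the constant k; for the
# published k, that bitmap happens to be the formula itself.
tupper(x, y) = isodd((y ÷ 17) >> (17x + Int(y % 17)))

# Pack a 17-row bitmap ('#' = ink) into a toy k (always a multiple of 17).
function encode(rows)
    n = big(0)
    for x in 0:length(rows[1])-1, j in 0:16           # j counts rows bottom-up
        rows[17 - j][x + 1] == '#' && (n |= big(1) << (17x + j))
    end
    return 17n
end

# Re-plot the strip y ∈ [k, k+17) by evaluating the inequality, top row first.
decode(k, w) = [join(tupper(x, k + j) ? '#' : '.' for x in 0:w-1) for j in 16:-1:0]

rows = [j in (1, 17) ? "#####" : "#...#" for j in 1:17]   # a hollow box
k = encode(rows)
@assert decode(k, 5) == rows                              # the formula redraws it
```

Continue reading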

Moore’s wall

A single chip such as the Intel Xeon Phi has a computational power in excess of 1 TFLOPS and features more than a hundred billion transistors. Few people outside the world of semiconductor engineering appreciate this, but that is a fantastical number: 100,000,000,000. If every transistor were a pixel, you would need a wall of more than 100 × 100 4K TV screens to display them all!
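A quick sanity check of that arithmetic (my own back-of-the-envelope, assuming standard 3840 × 2160 UHD panels):

```julia
# How many 4K panels to show 10^11 transistors, one pixel each?
transistors = 1e11
pixels_per_panel = 3840 * 2160            # one UHD panel
panels = transistors / pixels_per_panel
side = ceil(Int, sqrt(panels))
println(round(Int, panels), " panels, i.e. a wall of about ", side, " × ", side, " screens")
# → 12057 panels, a wall of about 110 × 110 screens
```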

Over the past fifty years, the semiconductor industry has achieved incredible things, in part thanks to planar technology, which allowed the manufacturing process to scale exponentially, following Moore’s law. But it seems that we’re about to hit a wall.

(image: “Faith No Moore”)

Let’s give an overview of where we stand, and where we go from here!

Continue reading

Quick’n’dirty

Over the years, I’ve collected quotes.

I’ve always liked quotes, because they are atoms of knowledge, quick and dirty ways to understand the world we only have one life to explore. To some extent, they are axioms of life, in that they are true and never require an explanation (otherwise they wouldn’t be quotations).

Here’s a bunch of quotes that I found particularly interesting, starting with my absolute favorite, which comes from the great Paul Valéry:

The folly of mistaking a paradox for a discovery, a metaphor for a proof, a torrent of verbiage for a spring of capital truths, and oneself for an oracle, is inborn in us. – Paul Valéry

On research — trial and error

Basic research is like shooting an arrow into the air and, where it lands, painting a target.
– Homer Burton Adkins

A thinker sees his own actions as experiments and questions – as attempts to find out something. Success and failure are for him answers above all.
– Friedrich Nietzsche
Continue reading

Learning Deep

In the past four years, there’s been a lot of progress in the field of machine learning, and here’s a story seen from the outskirts.

Eight years ago, for a mock start-up project, we tried to do some basic head tracking. At the time, my professor Stéphane Mallat told us that the most efficient way to do this was the Viola–Jones algorithm, which was still based on hard-coded features (integral images and Haar features) and a hard classifier (AdaBoost).
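The integral-image trick is what made Viola–Jones fast enough for real time: precompute cumulative sums once, and any rectangle sum (the building block of a Haar feature) costs four lookups. A minimal sketch in Julia (my own illustration; the real detector adds the boosted cascade on top):

```julia
# Integral image: ii[r, c] = sum of img[1:r, 1:c].
integral(img) = cumsum(cumsum(img, dims=1), dims=2)

# Sum of the rectangle img[r1:r2, c1:c2] in four lookups.
function rectsum(ii, r1, c1, r2, c2)
    at(r, c) = (r < 1 || c < 1) ? zero(eltype(ii)) : ii[r, c]
    at(r2, c2) - at(r1 - 1, c2) - at(r2, c1 - 1) + at(r1 - 1, c1 - 1)
end

# A two-rectangle Haar feature: left half minus right half, O(1) per window.
img = rand(24, 24)
ii = integral(img)
haar = rectsum(ii, 1, 1, 24, 12) - rectsum(ii, 1, 13, 24, 24)
```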

(I was thrilled when, a few years later, the Amazon Fire Phone embedded similar features; unfortunately, it was a complete bomb. Better technologies now exist and will make a splash pretty soon.)

Back then, the most advanced book on machine learning was “Information Theory, Inference, and Learning Algorithms” by David MacKay, a terrific read, along with “Pattern Recognition and Machine Learning” by Chris Bishop (which I never read past chapter 3, for lack of time).

Oh boy, how things have changed!

Continue reading

Julia Language

A few years ago, I got interested in the then-nascent Julia language (julialang.org), a new open-source language with Matlab-like syntax and C-like performance, thanks to its just-in-time compiler.
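The pitch fits in a snippet: you write the naive loop, and Julia JIT-compiles a type-specialized native version on the first call (a toy example of mine, not a rigorous benchmark):

```julia
# Plain Matlab-style loop; Julia compiles it to native code for Vector{Float64}.
function sumsq(v)
    s = zero(eltype(v))
    for x in v
        s += x^2
    end
    return s
end

v = rand(10^7)
@time sumsq(v)   # first call: timing includes JIT compilation
@time sumsq(v)   # second call: pure compiled code, C-like speed
```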

Large Synoptic Survey Telescope (LSST, Chile) data being processed with Julia on supercomputers with a 200x speedup (from https://arxiv.org/pdf/1611.03404.pdf)

It now seems that the language is gaining traction, with many available packages, lots of REPL integration (it works with Atom+Hydrogen, and I suspect Jupyter gets the “Ju” from Julia and the “py” from Python), and it is delivering on performance.

Julia is now used on supercomputers (such as those at Berkeley Lab’s NERSC), taught at MIT (by no less than Steven G. Johnson, the guy who brought us FFTW and MEEP!), and I’ve noticed that some of the researchers from Harvard’s RoLi Lab whom I’ve invited to SPIE DCS 2018 are sharing the Julia code from their paper “Pan-neuronal calcium imaging with cellular resolution in freely swimming zebrafish”. Pretty cool!

Julia used for code-sharing in a Nature publication. I wish I could see that every day!

I got a chance to attend parts of JuliaCon 2017 in Berkeley. I was amazed by how dynamic the community was, in part supported by the Moore Foundation (Carly Strasser, now head of the Coko Foundation), and happy to see Chris Holdgraf (my former editor at the Science Review) thriving at the Berkeley Institute for Data Science (BIDS).

Julia picking up speed at Intel (picture taken during JuliaCon 2017)

I started sharing some code for basic image processing (JLo) on GitHub. Tell me what you think!

(By the way, I finally shared my MEEP scripts on GitHub, and they’re here!)

Sexism in academia

This year, the recipients of the Nobel Prize were 100% men. That’s at the same time sad and scary: it seems that things are not changing at the pace they should.

Continue reading

ALS-U

Some may have been wondering what I have been up to lately!
At the beginning of the year, I started working on the ALS-U project, the upgrade of the Advanced Light Source, the main synchrotron at Lawrence Berkeley National Laboratory. The goal is to improve the facility with a Diffraction-Limited Storage Ring (DLSR) in order to increase the brilliance of the beam and allow scientists from all over the world to perform the most precise experiments, with bright, fully coherent beams with diameters as small as 10 nanometers, or about five times the width of a strand of DNA. (Here’s a report on all the niceties you can do with such a tool: ALS-U: Solving Scientific Challenges with Coherent Soft X-Rays.)
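What does “diffraction-limited” mean in practice? The usual rule of thumb is that a ring is diffraction-limited at wavelength λ once the electron-beam emittance falls below roughly λ/4π. A quick calculation (my own sketch, for a representative soft X-ray wavelength):

```julia
# Diffraction-limited emittance criterion: ε ≲ λ / 4π.
λ = 1e-9                                        # 1 nm soft X-ray (~1.2 keV)
ε = λ / (4π)                                    # required emittance, in m·rad
println(round(ε * 1e12, digits=1), " pm·rad")   # ≈ 79.6 pm·rad
# ALS-U targets an emittance of this order (tens of pm·rad).
```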

ALS-U logo

Continue reading

SHARP & MET5 – EUV Lithography at Lawrence Berkeley National Laboratory

Over the past four years, I’ve been working on two of the EUV tools at the Center for X-Ray Optics, and as I’m moving on to new projects, it’s time I explain what these two projects are about: the SHARP EUV microscope and the 0.5-NA Micro-Exposure Tool (MET5).
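For a sense of scale, the textbook Rayleigh criterion puts the printable half-pitch of a projection system at k1 · λ / NA. Plugging in the EUV numbers (my own back-of-the-envelope; k1 ≈ 0.25 is the single-exposure limit):

```julia
# Rayleigh criterion: half-pitch ≈ k1 · λ / NA.
λ, NA = 13.5, 0.5            # EUV wavelength in nm; MET5's numerical aperture
for k1 in (0.25, 0.3, 0.4)
    println("k1 = ", k1, " → half-pitch ≈ ", round(k1 * λ / NA, digits=1), " nm")
end
# → 6.8, 8.1, and 10.8 nm: single-digit-nanometer patterning
```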

A 6" EUV photomask

A 6″ EUV photomask

Continue reading

Qubit

I’ve run my first quantum computation!

Since I was working on the latest iteration of classical computer manufacturing techniques (EUV lithography), everyone asked me what my thoughts were on the future of Moore’s law, and what I thought about quantum computing. To the first question, I could mumble things about transistor size and the fact that we’re getting awfully close to the atomic scale; as for the latter question… I just had to go figure it out myself!

Back in April, I invited Irfan Siddiqi (qnl.berkeley.edu), founding director of the brand-new Center for Quantum Coherent Science, and his postdocs to give a talk to the postdocs at Berkeley Lab, and the lab recently announced the first 45-qubit quantum simulation at NERSC… things are going VERY fast! (Read the Quantum Manifesto.)
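To see why a 45-qubit simulation is a supercomputer-scale job, just count the memory for the full state vector (my arithmetic, assuming double-precision complex amplitudes):

```julia
# A 45-qubit state vector holds 2^45 complex amplitudes.
amplitudes = 2.0^45
bytes = amplitudes * 16          # one complex128 amplitude = 16 bytes
println(bytes / 2^50, " PiB")    # → 0.5 PiB, before any working memory
```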

Kevin O’Brien on multiplexing qubit readouts

As for my own first computation, it ran thanks to Rigetti, a full-stack quantum computing startup based in Berkeley (Wired, IEEE Spectrum).
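The “hello world” of quantum computing is entangling two qubits into a Bell pair: a Hadamard followed by a CNOT. Here is a toy state-vector version (my own sketch in Julia, nothing to do with Rigetti’s actual stack):

```julia
using LinearAlgebra

H = [1 1; 1 -1] / √2                            # Hadamard gate
CNOT = [1.0 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0]   # flip qubit 2 if qubit 1 is set
I2 = Matrix(1.0I, 2, 2)                         # identity on the second qubit

ψ = [1.0, 0, 0, 0]                              # start in |00⟩
ψ = CNOT * (kron(H, I2) * ψ)                    # H on qubit 1, then CNOT
println(abs2.(ψ))                               # → [0.5, 0.0, 0.0, 0.5]
# Only |00⟩ and |11⟩ carry weight: measuring one qubit fixes the other.
```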

Continue reading

Energy Dominance

To say the least, the mood is not at its peak at the lab…

We have a new Secretary of Energy – Rick Perry (R), former governor of Texas – who doesn’t seem to care much about science (e.g., he believes it’s fine to question climate change; at least there’s someone to tell him no, it’s not) and who is now on a crusade to ensure #energydominance, a concept that I try to comprehend, but really can’t.

Now see his incredible op-ed in the Washington Times (the black mirror of the New York Post, I guess), “Paving the path to U.S. energy dominance”:

Mr. Trump wants America to utilize our abundant domestic energy resources and technological innovations for good, both at home and abroad. […] An energy-dominant America will export to markets around the world, increasing our global leadership and influence. Becoming energy dominant means that we are getting government out of the way so that we can share our energy wealth with developing nations. For years, Washington stood in the way of our energy dominance. That changes now.

Holy cow! That is a genius strategy!
Oh wait… what strategy? Selling coal and gas that will be worthless in three years?

Here’s what the previous Secretary, Ernest Moniz, has to say:

Moniz: […] With some colleagues, we’re starting up a small non-profit in the energy space and this was also a question that we intended to look at.

However, a review of this type also needs to look at the emerging technologies. For example, the utility in Tucson recently announced a long-term, a 20-year purchase-power agreement for solar energy plus storage at a pretty attractive—stunning, actually in my view—price. They quoted less than 4.5 cents per kilowatt-hour, including the storage.

Madrigal: Wow. [In Arizona, the average cost of electricity in March 2017 was 9.7 cents per kilowatt-hour. Electricity prices vary around the nation, but the U.S. average was 10.3 cents per kilowatt-hour in March 2017.]

Meanwhile, the science office at the White House is now empty. zero. nichts. kaput.

It is quite incredible to hear that, while a mere six months ago it was populated by some of the finest people I know, like my (extended) friend Maya Shankar.

(image: “I’m fine”)

Oh boy, the second half of the year starts even better than the first half.