Category Archives: english

East Bay Express Arts & Ads

During the five years (already!) I’ve lived in Berkeley, I’ve always been faithful to the East Bay Express (EBX), which stayed strong when the Bay Guardian went down. I have great memories of the columns by Anna Pulley, the culture notes by Sarah Burke, and the movie reviews by Kelly Vance.

Over these years, I’ve collected clippings from the paper, which I believe capture the atmosphere of the East Bay circa 2015. Here are a few:

Continue reading

Addictions

Around me, all I see is the blue glow of phones illuminating the faces of people.

These days, there are so many things to discover and so much information to consume, and we’re getting addicted.

What I fault newspapers for is that day after day they draw our attention to insignificant things whereas only three or four times in our lives do we read a book in which there is something really essential. – Marcel Proust, In Search of Lost Time

Continue reading

Art and science (IX) – Neural networks

This is a continuation of a series of blog posts, written mostly in French, about art and science.

In the past few years, we’ve seen the emergence of deep neural networks (DNNs), and the latest developments are Generative Adversarial Networks (GANs), where the goal is to pit two neural networks against each other so that they find the best way to generate an object from a label or a simple drawing, or to mimic the style of an artist.
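To make the adversarial idea concrete, here is a minimal training-loop sketch; this is my own illustration (assuming PyTorch), not code from any of the works mentioned here. A tiny generator learns to turn noise into samples from a toy 2-D Gaussian, while a discriminator tries to tell its output apart from the real samples.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: noise -> 2-D point; Discriminator: 2-D point -> real/fake logit
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # toy "real" data
    fake = generator(torch.randn(64, 8))

    # Discriminator step: push real samples toward 1, generated samples toward 0
    d_loss = bce(discriminator(real), torch.ones(64, 1)) \
           + bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator label its output as real
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    if step % 200 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")

Scaled up to convolutional networks and image datasets, this same tug-of-war is what drives the results discussed in this post.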

The first ripple in the vast ocean of possibility was Deep Dream, though it wasn’t technically a GAN:

Now things have evolved even further: you can not only generate trippy videos, but also use neural networks to emulate the style of an artist and generate, from scratch, content that is genuinely appealing!

Continue reading

SPIE DCS 2018: CCSI – Computational Imaging

This year I’m chairing the Computational Imaging session at SPIE Defense + Commercial Sensing in Orlando, Fla., April 16-19, 2018, together with Aamod Shanker. We have invited many amazing speakers, and we are organizing a panel discussion on trends in computational imaging.

Here’s the program:

SESSION 6 TUE APRIL 17, 2018 – 11:10 AM TO 12:00 PM
Computational Imaging I
[10656-22] “Ultra-miniature…” David G. Stork, Rambus Inc. (USA)
[10656-36] “Computed axial lithography: volumetric 3D printing of arbitrary geometries” Indrasen Bhattacharya
Lunch/Exhibition Break – Tue 12:00 PM to 1:50 PM

SESSION 7 TUE APRIL 17, 2018 – 1:50 PM TO 3:30 PM
Computational Imaging II
[10656-24] “Terahertz radar for imaging…” Goutam Chattopadhyay
[10656-23] “Computational imaging…” Lei Tian
[10656-26] “Achieving fast high-resolution 3D imaging” Dilworth Y. Parkinson
[10656-27] “Linear scattering theory in phase space” Aamod Shanker

PANEL DISCUSSION TUE APRIL 17, 2018 – 4:00 PM TO 6:00 PM

TUESDAY POSTER SESSION TUE 6:00 PM TO 8:00 PM

SESSION 8 WED APRIL 18, 2018 – 8:00 AM TO 10:05 AM
Computational Imaging III
[10656-28] “High resolution 3D imaging…” Michal Odstrcil
[10656-29] “A gigapixel camera array…” Roarke Horstmeyer
[10656-30] “EUV photolithography mask inspection using Fourier ptychography” Antoine Wojdyla
[10656-31] “New systems for computational x-ray phase imaging…” Jonathan C. Petruccelli
[10656-68] “Low dose x-ray imaging by photon counting detector” Toru Aoki

Continue reading

Oasys

With increasingly tight beamline specifications, optical modeling software becomes necessary to design conceptual beamlines and predict their performance. This is particularly true with the advent of highly coherent light sources (such as the proposed upgrade of the ALS), where additional considerations, such as mirror deformation under heat load and the effects of partial coherence, need to be studied. Luca Rebuffi will present the latest features of OASYS/Shadow, an optical beamline modeling tool widely used in the synchrotron community, and will show how to get started with beamline simulations.

https://github.com/awojdyla/ALS-U_Examples
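For a taste of what a simulation looks like outside the OASYS GUI, here is a minimal ray-tracing sketch using the Shadow3 Python bindings, with default source and optical-element settings; this is my own illustration, and the actual workshop material lives in the repository above.

import Shadow

beam = Shadow.Beam()       # container for the rays
source = Shadow.Source()   # geometric source, default settings
oe1 = Shadow.OE()          # first optical element, default settings

beam.genSource(source)     # generate rays from the source
beam.traceOE(oe1, 1)       # trace them through the optical element

print(beam.rays.shape)     # one row per ray: positions, directions, fields, flags

In OASYS the same steps are wired together graphically, with each widget driving this kind of Shadow call under the hood.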

Program: Continue reading

Fump Truck

Screen capture from a presidential stump speech

 

A car in Berkeley

Self-reference

Self-reference is a cornerstone of Hofstadter’s Gödel, Escher, Bach, a must-read for anyone interested in logic (and we shall need to rely on logic these days to stay sane).

Here’s a bunch of examples of self-reference that I found interesting, curated just for you!

Barber’s paradox:

The barber is the “one who shaves all those, and those only, who do not shave themselves.” The question is, does the barber shave himself?

Self-referential figure (via xkcd):

Tupper’s self-referential formula, which plots itself on a screen (via Brett Richardson) Continue reading
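For the curious, here is how Tupper’s formula actually works; this little sketch is my own illustration, using integer bit arithmetic equivalent to the published inequality 1/2 < floor(mod(floor(y/17)·2^(−17·floor(x)−mod(floor(y),17)), 2)). The formula simply decodes a bitmap stored in the giant constant k, so we can encode our own tiny bitmap and read it back the same way.

def encode(bitmap):
    # Pack a list of rows (strings of '.'/'#') into a Tupper-style constant k.
    height = len(bitmap)                  # 17 in the original formula
    width = len(bitmap[0])
    n = 0
    for x in range(width):
        for yy in range(height):
            bit = 1 if bitmap[height - 1 - yy][x] == '#' else 0
            n += bit << (height * x + yy)
    return n * height                     # k must be a multiple of the height

def decode(k, width, height):
    # Evaluate the formula on the strip k <= y < k + height.
    rows = []
    for yy in range(height - 1, -1, -1):
        y = k + yy
        row = ''
        for x in range(width):
            bit = (y // height) >> (height * x + y % height) & 1
            row += '#' if bit else '.'
        rows.append(row)
    return rows

smiley = ['.#.#.',
          '.....',
          '#...#',
          '.###.']
k = encode(smiley)
print('\n'.join(decode(k, width=5, height=4)))   # prints the smiley back

Tupper’s famous 543-digit constant is just this encoding applied to a 106 x 17 bitmap of the formula itself.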

Moore’s wall

A single chip such as the Intel Xeon Phi has a computational power in excess of 1 TFLOPS and features more than a hundred billion transistors. Few people outside the world of semiconductor engineering appreciate this, but that is a fantastical number: 100,000,000,000. If every transistor were a pixel, you would need a wall of 100 x 100 4K TV screens to display them all!
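A quick back-of-the-envelope check of that wall-of-screens figure (my own arithmetic, not from the post):

transistors = 100e9                       # ~1e11 transistors, as quoted above
pixels_per_4k = 3840 * 2160               # ~8.3 million pixels per UHD screen
wall_pixels = 100 * 100 * pixels_per_4k   # a 100 x 100 wall of 4K TVs
print(f"wall: {wall_pixels:.2e} pixels vs {transistors:.2e} transistors")
# -> wall: 8.29e+10 pixels vs 1.00e+11 transistors: same order of magnitude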

Over the past fifty years, the semiconductor industry has achieved incredible things, in part thanks to planar technology, which made it possible to scale the manufacturing process exponentially, following Moore’s law. But it seems that we’re about to hit a wall soon.


Let’s take stock of where we stand, and where we go from here!

Continue reading

Quick’n’dirty

Over the years, I’ve collected quotes.

I’ve always liked quotes, because they are atoms of knowledge, quick and dirty ways to understand the world we only have one life to explore. To some extent, they are axioms of life, in that they are true and never require an explanation (otherwise they wouldn’t be quotations).

Here’s a bunch of quotes that I found particularly interesting, starting with my absolute favorite, which comes from the great Paul Valery:

The folly of mistaking a paradox for a discovery, a metaphor for a proof, a torrent of verbiage for a spring of capital truths, and oneself for an oracle, is inborn in us. – Paul Valery

On research — trial and error

Basic research is like shooting an arrow into the air and, where it lands, painting a target.
– Homer Burton Adkins

A thinker sees his own actions as experiments and questions – as attempts to find out something. Success and failure are for him answers above all.
– Friedrich Nietzsche
Continue reading

Learning Deep

In the past four years, there’s been a lot of progress in the field of machine learning, and here’s a story seen from the outskirts.

Eight years ago, for a mock start-up project, we tried to do some basic head tracking. At that time, my professor Stéphane Mallat told us that the most efficient way to do this was the Viola-Jones algorithm, which was still based on hand-crafted features (integral images and Haar features) and a boosted classifier (AdaBoost).
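As a point of reference, Viola-Jones detection is still only a few lines of code today with OpenCV’s bundled Haar cascades; this is my own minimal sketch (the image path is a placeholder), not the code from our project.

import cv2

# Load the pre-trained frontal-face Haar cascade that ships with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("portrait.jpg")                 # placeholder: any image with a face
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                         # box each detection
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", image)

It runs in real time on a laptop CPU, which is precisely why it was the pragmatic choice back then.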

(I was thrilled when, a few years later, the Amazon Fire Phone embedded similar features; unfortunately, it was a complete flop. Better technologies now exist and will make a splash pretty soon.)

At the time, the most advanced books on machine learning were “Information Theory, Inference, and Learning Algorithms” by David MacKay, a terrific book to read, and “Pattern Recognition and Machine Learning” by Chris Bishop (which I never read past chapter 3, for lack of time).

Oh boy, how things have changed!

Continue reading