Category Archives: projects

Greater Caribbean Light Source

Last week I hosted Leo Violini, the founder of the Centro Internacional de Física in Bogotá (Colombia), and a proponent of the Greater Caribbean Light Source.

Big science in Latin America: accelerate particles and progress – Nature (March 2024)

Here is a video of his talk on the proposal for the Greater Caribbean Light Source:

And a video of his second talk on science diplomacy:

Rise of the Machines

Recently, there’s been a lot of interesting activity in the field of generative AI for science from large companies such as Google, Meta and Microsoft.
Creating new materials from scratch is difficult: materials involve complex interactions that are hard to simulate, so discovery often requires a fair amount of luck in experiments (serendipity is a scientist’s most terrifying friend).
Thus, most of these efforts aim to discover new materials by accelerating simulations using machine learning. Recent advances (such as LLMs, e.g., ChatGPT) have shown that AI can be used to produce coherent sentences instead of word soup. But in the same way cooking is not just about putting ingredients together all at once but about carefully preparing them, making a new material involves important intermediate steps. New approaches can be used at each of these steps to create new materials.

The various steps of making a new material (from Szymanski et al.)

Last month, Google DeepMind, in collaboration with Berkeley Lab, announced that its GNoME project had discovered a lot of new structures: Google DeepMind Adds Nearly 400,000 New Compounds to Berkeley Lab’s Materials Project. They managed to actually make and analyze some of those new materials; that is quite a tour de force, and while there’s some interesting pushback on the claims, it’s still pretty cool!
In September, I invited Meta’s Open Catalyst team to Berkeley Lab (here’s the event description and the recording – accessible to lab employees only).

Zachary Ulissi (Meta/OpenCatalyst) and Jin Qian (Berkeley Lab) at Lawrence Berkeley National Laboratory (September 2023)

Meanwhile, Microsoft is collaborating with Pacific Northwest National Laboratory on similar topics.
The research infrastructure also has its gears moving; it seems that DeepMind’s AlphaFold is already routinely used at the lab to dream up new protein structures. I wonder where this will go!
Prediction is very difficult, especially if it’s about the future
– Niels Bohr
Thinkpieces blending chips and AI are in full bloom:
We need a moonshot for computing – Brady Helwig and PJ Maykish,  Technology Review

The Shadow of Bell Labs

I want to resurface an interesting thread by my former colleague Ilan Gur:

Continue reading

On Mentorship

This last month, I received two awards related to mentorship from Berkeley Lab. They both came as a surprise, since I consider myself more a student of mentorship than someone who has something to show for it.

Berkeley Lab Outstanding Mentorship Award

Director’s Award for building the critical foundations of a complex mentoring ecosystem

I became interested in mentorship after realizing that it plays a large role in the success of young scientists: (1) having experienced myself the difference between having no mentorship and having appropriate mentorship (I’ll be forever grateful to my mentor/colleague/supervisor Ken Goldberg), (2) having had tepid internship supervision experiences due to the lack of guidance, and (3) realizing that academia is ill-equipped to provide the resources necessary for success.

While I was running Berkeley Lab Series X, I always asked the speakers (typically Nobel Prize laureates, stellar scientists and directors of prominent research institutions) how they learned to manage a group, and their answer was generally “on the spot, via trial and error,” which struck me as awfully wrong. If people don’t get the proper resources and training, many are likely to fail, and drag their own group into the abyss. In this post, I will try to share what I learned about mentorship over the years, along with resources I found useful. This is more descriptive of my experience than prescriptive, but I hope you find it useful.

Continue reading

Lamaseries

It’s been a few months since the ChatGPT craze started, and we’re finally seeing some interesting courses and guidelines, particularly for coding, where I found the whole thing quite impressive.

https://static.tvtropes.org/pmwiki/pub/images/llama_loogie_tintin.jpg

Ad hoc use of LLaMa

Here are a few that may be of interest, potentially growing over time (this is mostly a note to self.)

Plus – things are getting really crazy: Large language models encode clinical knowledge (Nature, Google Research.)


Updates on AI for big science

There are a lot of things happening on the front of AI for Big Science (AI for large-scale facilities, such as synchrotrons.)

The recently published DOE AI for Science, Energy, and Security Report provides interesting insights, and a much-needed update to the 2020 AI for Science Report.

Computing facilities are upgrading to provide scientists the tools to engage with the latest advances in machine learning. I recently visited NERSC’s Perlmutter supercomputer, and it is LOADED with GPUs for AI training.

A rack of Tesla A100 from the Perlmutter supercomputer at NERSC/Berkeley Lab

Meanwhile, companies with large computing capabilities are making interesting forays into using AI for science. For instance, Meta is developing Open Catalyst in collaboration with Carnegie Mellon University, with the goal of creating AI models to speed up the study of catalysts, which is generally very computationally intensive (see the Berkeley Lab Materials Project.) Now the cool part is to verify these results using x-ray diffraction at synchrotron facilities. Something a little similar happened with AlphaFold, where newly derived structures may need to be tested with x-rays at the Advanced Light Source: Deep-Learning AI Program Accurately Predicts Key Rotavirus Protein Fold (ALS News)

Continue reading

Out Of Many

Last week I was lucky to meet with Vanessa Chan, the Chief Commercialization Officer for the Department of Energy and Director of the Office of Technology Transitions. She wanted to hear what kind of hurdles scientists face when starting a company (hint: a lot.) I told her that a major, overlooked issue is that you generally need to be a permanent resident to start a company in the US, whereas two-thirds of postdocs are foreign nationals on visas. There are ways to get around the requirement (such as Unshackled), but it’s a little sad that not more is done to support those willing and able (plus, it is a well-known trope that many US companies are founded by foreign nationals, which I tend to believe is part of what sets California apart from other states and countries, where entrepreneurship doesn’t flourish as much as expected despite many efforts.)

Conversation with Vanessa Chan

Two ways to say things

I am a native French speaker, and I have always been confused by the ubiquity of English, a language which is actually quite difficult to speak (why are tough, though, thought and enough so different?) I was also puzzled by the difference between liberty and freedom; no one could ever explain the difference to me, even though “Freedom” is probably the most overused concept in American society (French has “Liberté” in its national motto, but it has nothing to do with “free” as in “free sample.”)

Finally, I found an interesting explanation by Jorge Luis Borges, who sees this as a feature, not a bug:

I have done most of my reading in English. I find English a far finer language than Spanish.

Firstly, English is both a Germanic and a Latin language. Those two registers—for any idea you take, you have two words. Those words will not mean exactly the same. For example if I say “regal” that is not exactly the same thing as saying “kingly.” Or if I say “fraternal” that is not the same as saying “brotherly.” Or “dark” and “obscure.” Those words are different. It would make all the difference—speaking for example—the Holy Spirit, it would make all the difference in the world in a poem if I wrote about the Holy Spirit or I wrote the Holy Ghost, since “ghost” is a fine, dark Saxon word, but “spirit” is a light Latin word. Then there is another reason.

The reason is that I think that, of all languages, English is the most physical of all languages. You can, for example, say “He loomed over.” You can’t very well say that in Spanish.

And then you have, in English, you can do almost anything with verbs and prepositions. For example, to “laugh off,” to “dream away.” Those things can’t be said in Spanish. To “live down” something, to “live up to” something—you can’t say those things in Spanish. They can’t be said. Or really in any Roman language.

(thanks Jordan Poss for the transcription!)

I really enjoy this notion of physicality – onomatopoeias are a vibrant part of the language: whisper, gulp, slam, rumble, slushy, etc.

Wovon man nicht sprechen kann, darüber muss man schweigen.
Whereof one cannot speak, thereof one must be silent.
– Ludwig Wittgenstein

Let’s all clap for Borges!

SFMOMA x Berkeley Lab: Hybrid forms

Yesterday I invited Tanya Zimbardo from the San Francisco Museum of Modern Art to give a talk at Berkeley Lab (details about the event can be found here: Hybrid Forms: Connecting Art and Science).

Tanya Zimbardo (SFMOMA) at Berkeley Lab

It was quite interesting to hear her perspective on a topic close to my heart, and I was happy to hear many references to Rafael Lozano-Hemmer, who currently has the Techs-Mechs exhibition running at the Gray Area, but also quite surprised not to hear anything about Jim Campbell (whose art glows atop the Salesforce building, the “Eye of Sauron”) or the work of Illuminate.

Continue reading

Using machine learning to achieve diffraction-limited performance with x-ray deformable mirrors

In our latest paper, we present the use of machine learning to get the most out of x-ray adaptive optics – and it works like magic! This was great work accomplished by Gautam Gunjala, a grad student from UC Berkeley on a SCGSR grant, together with our wonderful colleagues from the Advanced Photon Source.

X-ray adaptive mirrors are very nice, because they allow us to correct the shape of x-ray beams when the beam gets distorted by mirror deformation or misalignment. That’s why we want to use them in the latest generation of synchrotron light sources such as ALS-U or APS-U.
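To build intuition for what “correcting the shape of the beam” means, here is a toy sketch of the classic linear approach: model how each mirror actuator deforms the wavefront (an influence matrix), then solve for the actuator commands that cancel a measured distortion. This is only an illustrative least-squares baseline with made-up numbers, not the machine-learning method from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical influence matrix: how each of 5 actuators deforms the
# wavefront, sampled at 50 points along the mirror (made-up values).
n_points, n_actuators = 50, 5
influence = rng.normal(size=(n_points, n_actuators))

# A distorted wavefront we want to flatten (e.g., from mirror
# misalignment), generated here from hidden actuator offsets.
true_offsets = rng.normal(size=n_actuators)
distorted = influence @ true_offsets

# Solve for the actuator commands that best cancel the distortion.
commands, *_ = np.linalg.lstsq(influence, -distorted, rcond=None)

# The corrected wavefront is (nearly) flat.
residual = distorted + influence @ commands
print(np.max(np.abs(residual)))  # ~0
```

In practice the influence matrix is imperfectly known and the response is not purely linear, which is one motivation for bringing machine learning into the loop.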

Continue reading

Angela Saini at Berkeley Lab

We were pleased, as Berkeley Lab Global Employee Resource Group co-chairs, to invite Angela Saini to Berkeley Lab and co-organize her visit on November 9th, 2022.

Author Angela Saini in conversation with Aditi Chakravarti from the Diversity and Inclusion office at Berkeley Lab (IDEA)

More details about the event:
global.lbl.gov/events/idea-speakers-series-angela-saini