Prediction is very difficult, especially if it’s about the future. – Niels Bohr
This last month, I received two awards related to mentorship from Berkeley Lab. They both came as a surprise, since I consider myself more a student of mentorship than someone with much to show for it.
Berkeley Lab Outstanding Mentorship Award
Director’s Award for building the critical foundations of a complex mentoring ecosystem

I became interested in mentorship after realizing that it plays a large role in the success of young scientists: (1) having experienced myself the difference between having no mentorship and having appropriate mentorship (I’ll be forever grateful to my mentor/colleague/supervisor Ken Goldberg), (2) having had a tepid internship supervision experience due to a lack of guidance, and (3) realizing that academia is ill-equipped to provide the resources necessary for success.

While I was running Berkeley Lab Series X, I always asked the speakers (typically Nobel laureates, stellar scientists, and directors of prominent research institutions) how they learned to manage a group, and their answer was generally “on the spot, via trial and error,” which struck me as awfully wrong. If people don’t get proper resources and training, many are likely to fail and drag their own group down into the abyss. In this post, I will share the resources I gathered over the years and what I learned about mentorship. This is more descriptive of my experience than prescriptive, but I hope you find it useful.

Continue reading

It’s been a few months since the ChatGPT craze started, and we’re finally seeing some interesting courses and guidelines, particularly for coding, where I found the whole thing quite impressive.
Here are a few that may be of interest, potentially growing over time (this is mostly a note to self.)
Plus – things are getting really crazy: Large language models encode clinical knowledge (Nature, Google Research.)
There’s a lot happening on the front of AI for Big Science (AI for large-scale facilities, such as synchrotrons.)
The recently published DOE AI for Science, Energy, and Security Report provides interesting insights and a much-needed update to the AI for Science Report of 2020.

Computing facilities are upgrading to provide scientists the tools to engage with the latest advances in machine learning. I recently visited NERSC’s Perlmutter supercomputer, and it is LOADED with GPUs for AI training.

Meanwhile, companies with large computing capabilities are making interesting forays into using AI for science. For instance, Meta is developing OpenCatalyst in collaboration with Carnegie Mellon University, with the goal of creating AI models to speed up the study of catalysts, which is generally very compute-intensive (see the Berkeley Lab Materials Project.) Now the cool part is verifying these results using x-ray diffraction at synchrotron facilities. Something similar happened with AlphaFold, where newly derived structures may need to be tested with x-rays at the Advanced Light Source: Deep-Learning AI Program Accurately Predicts Key Rotavirus Protein Fold (ALS News)
Continue reading

Last week I was lucky to meet with Vanessa Chan, the Chief Commercialization Officer for the Department of Energy and Director of the Office of Technology Transitions. She wanted to hear what kinds of hurdles exist when it comes to starting a company (hint: a lot.) I told her that a major, overlooked issue is that you generally need to be a permanent resident to start a company in the US, whereas two-thirds of postdocs are foreign nationals on visas. There are ways to get around the requirement (such as Unshackled), but it’s a little sad that more isn’t done to support those willing and able (plus, it is a well-known trope that many US companies are founded by foreign nationals, which I tend to believe is among what sets California apart from other states and countries, where entrepreneurship doesn’t flourish as much as expected despite many efforts.)
I am a native French speaker, and I have always been confused by the ubiquity of English, a language that is actually quite difficult to speak (why are tough, though, thought, and enough so different?) I was also puzzled by the difference between liberty and freedom – no one could ever explain the difference to me, even though “Freedom” is probably the most overused concept in American society (French has “Liberté” in its national motto, but it has nothing to do with “free” as in “free sample.”)
Finally, I found an interesting explanation by Jorge Luis Borges, who sees this as a feature, not a bug:

I have done most of my reading in English. I find English a far finer language than Spanish.
Firstly, English is both a Germanic and a Latin language. Those two registers—for any idea you take, you have two words. Those words will not mean exactly the same. For example if I say “regal” that is not exactly the same thing as saying “kingly.” Or if I say “fraternal” that is not the same as saying “brotherly.” Or “dark” and “obscure.” Those words are different. It would make all the difference—speaking for example—the Holy Spirit, it would make all the difference in the world in a poem if I wrote about the Holy Spirit or I wrote the Holy Ghost, since “ghost” is a fine, dark Saxon word, but “spirit” is a light Latin word. Then there is another reason.
The reason is that I think that, of all languages, English is the most physical of all languages. You can, for example, say “He loomed over.” You can’t very well say that in Spanish.
And then you have, in English, you can do almost anything with verbs and prepositions. For example, to “laugh off,” to “dream away.” Those things can’t be said in Spanish. To “live down” something, to “live up to” something—you can’t say those things in Spanish. They can’t be said. Or really in any Roman language.

I really enjoy this notion of physicality – onomatopoeia are a vibrant part of the language: whisper, gulp, slam, rumble, slushy, etc.
Yesterday I invited Tanya Zimbardo from the San Francisco Museum of Modern Art to give a talk at Berkeley Lab (details about the event can be found here: Hybrid Forms: Connecting Art and Science)
It was quite interesting to hear her perspective on a topic close to my heart, and I was happy to hear many references to Rafael Lozano-Hemmer, who currently has the Techs-Mechs exhibition running at the Gray Area. But it was also quite surprising not to hear anything about Jim Campbell (whose art glows atop the Salesforce building, the “Eye of Sauron”) or the work of Illuminate.
Continue reading

In our latest paper, we present the use of machine learning to get the most out of x-ray adaptive optics – and it works like magic! This was great work accomplished by Gautam Gunjala, a grad student from UC Berkeley under a SCGSR grant, together with our wonderful colleagues from the Advanced Photon Source.
X-ray adaptive mirrors are very nice because they allow us to correct the shape of x-ray beams when the beam gets distorted by mirror deformation or misalignment. That’s why we want to use them in the latest generation of synchrotron light sources, such as ALS-U and APS-U.

Continue reading

We were pleased, as Berkeley Lab Global Employee Resource Group co-chairs, to invite Angela Saini and co-organize her visit to Berkeley Lab on November 9th, 2022.
More details about the event:
global.lbl.gov/events/idea-speakers-series-angela-saini