Prediction is very difficult, especially if it’s about the future – Niels Bohr
Actibus immensis urbs fulget massiliensis – “The city of Marseille shines through its great achievements”
These days, things are getting pretty busy on my end – so many cool projects to engage with and only 24 hours in a day.
And you end up taking on more things than you can accomplish. The reason often lies in an unrealistic assessment of the time it takes to complete a task, and I came across the “pi” rule, initially posited by my mentor Ken, with a pretty neat explanation from my colleague Val:

If you estimate it will take one unit of time to complete a task, the task will effectively take 3.14 (≈π) times longer than you initially anticipated.
The reason for the difference between dream and reality is that we generally do not factor in:
Taken together with the time it takes to actually accomplish the task, you end up with roughly a factor of three – and you end up feeling terrible during the weekends, trying to catch up on what you had set out to do during the week but were busy doing (1) or (2) instead.
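For the fun of it, here is the rule as a tiny Python sketch – the function name and the hour-based units are my own, not part of the original formulation:

```python
import math

def realistic_estimate(optimistic_hours: float) -> float:
    """The 'pi' rule: a task takes ~3.14x the time you optimistically planned for it."""
    return math.pi * optimistic_hours

# You budgeted 2 hours for that "quick" analysis script...
print(f"{realistic_estimate(2):.1f} hours")  # ~6.3 hours, i.e. most of your afternoon
```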
A corollary of the pi rule is the “next up” rule: if you work on a project with a relatively large team, it generally takes the next unit of time to complete it (e.g. one hour becomes one day, one day becomes a week, a week becomes a month), mostly because of the friction at the interfaces. Reducing these frictions at the interfaces should therefore be a priority (a small sketch of the rule follows the tweet below).

Weekends are a constant battle between “I should do all the work I put off during the week!” and “taking time off is important to recharge!” but it’s okay because I usually find a nice compromise in which I lounge around feeling guilty and accomplishing nothing
— Katie Mack (@AstroKatie) August 15, 2021
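And the same kind of tongue-in-cheek Python sketch for the “next up” rule – the mapping simply encodes the examples above:

```python
# The 'next up' rule: on a project with a relatively large team, a task jumps
# to the next unit of time, mostly because of friction at the interfaces.
NEXT_UNIT = {"hour": "day", "day": "week", "week": "month"}

def next_up(unit: str) -> str:
    """Return the unit of time a task effectively takes with a large team."""
    return NEXT_UNIT.get(unit, unit)

print(next_up("hour"))  # 'day': the one-hour fix lands tomorrow, at best
```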
I recently learned that my colleague Bertrand Nicquevert has worked extensively on a model to describe interactions between various counterparts:
Modelling engineering interfaces in big science collaborations at CERN: an interaction-based model

I was lucky to meet the President of the University of California, Michael V. Drake, in my capacity as co-chair of the Berkeley Lab Global Employee Resource Group (global.lbl.gov), dedicated to providing support to international employees at the national laboratory.
I made the point that re-building communities should be a priority after the pandemic, particularly for early-career scientists, who do not have family nearby or attend a school where they could weave their social fabric. The participation of international scientists at Berkeley Lab is an important strength: the national lab is de facto at the center of international research, and that gives it a competitive edge over countries such as China or Saudi Arabia, where large research expenditures cannot compensate for the lack of free flow of ideas.
I think my talking points were well received, and President Drake encouraged collaboration on these topics with the University of California, Berkeley.

I’ve read an interesting piece on Twitter from the always excellent Kareem Carr on the ladder of causation. I found it very interesting because it allows you to go beyond the mantra “correlation is not causation”, and links statistics to the concept of falsifiability that Karl Popper puts as central to science.
The Ladder of Causation has three levels:
1. Association. This involves the prediction of outcomes as a passive observer of a system.
2. Intervention. This involves the prediction of the consequences of taking actions to alter the behavior of a system.
3. Counterfactuals. This involves the prediction of the consequences of taking actions to alter the behavior of a system, had circumstances been different.

I even read the book it comes from – “The Book of Why” [Full book on the Internet Archive] by Judea Pearl, a Turing Award recipient who worked on Bayesian networks. The book is quite illuminating, though it mentions a bit too often dark figures such as Galton, Pearson and Fisher (it seems statisticians get really high on their own supply.)
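To make the difference between the first two rungs concrete, here is a minimal NumPy sketch of my own (a toy example, not from the book): a confounder Z drives both X and Y, so association suggests a link between X and Y, while an intervention on X shows there is none.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural model: a confounder Z drives both X and Y; Y ignores X entirely.
z = rng.random(n) < 0.5                    # Z ~ Bernoulli(0.5)
x = np.where(rng.random(n) < 0.9, z, ~z)   # X copies Z 90% of the time
y = np.where(rng.random(n) < 0.9, z, ~z)   # Y copies Z 90% of the time

# Rung 1 (association): P(Y=1 | X=1) is high, because Z confounds both.
p_assoc = y[x].mean()

# Rung 2 (intervention): force X=1 for everyone, i.e. do(X=1). Since Y never
# depended on X, P(Y=1 | do(X=1)) is just P(Y=1).
p_do = y.mean()

print(f"P(Y=1 | X=1)     ~ {p_assoc:.2f}")  # ~0.82: looks like X 'causes' Y
print(f"P(Y=1 | do(X=1)) ~ {p_do:.2f}")     # ~0.50: intervening on X changes nothing

# Rung 3 (counterfactuals) asks what Y would have been for the same individuals
# had X been different, which requires the structural model, not just the data.
```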
This last month, I received two awards related to mentorship from Berkeley Lab. They both came as a surprise, since I consider myself more a student of mentorship than someone who has something to show for it.

Berkeley Lab Outstanding Mentorship Award
Director’s Award for building the critical foundations of a complex mentoring ecosystem

I began to be interested in mentorship after I realized that mentorship plays a large role in the success of young scientists: (1) having experienced myself the difference between having no mentorship and having appropriate mentorship (I’ll be forever grateful to my mentor/colleague/supervisor Ken Goldberg), (2) having had a tepid internship supervision experience due to the lack of guidance, and (3) realizing that academia is ill-equipped to provide the resources necessary for success.

While I was running Berkeley Lab Series X, I always asked the speakers (typically Nobel prize laureates, stellar scientists and directors of prominent research institutions) how they learned to manage a group, and the answer was generally “on the spot, via trial and error”, which struck me as awfully wrong. If people don’t get proper resources and training, many are likely to fail, and to drag their own group down into the abyss. In this post, I will try to share the resources I have gathered over the years and what I have learned about mentorship. This is more descriptive of my experience than prescriptive, but I hope you find it useful.

It’s been a few months since the ChatGPT craze started, and we’re finally seeing some interesting courses and guidelines, particularly for coding, where I found the whole thing quite impressive.
Here are a few that may be of interest, in a list potentially growing over time (this is mostly a note to self.)
Plus – things are getting really crazy: Large language models encode clinical knowledge (Nature, Google Research.)
I leave Twitter for a few months, and the science world is all upside down!
The superconductivity community was simmering with the news that a new compound named LK-99 might be superconducting at room temperature. Eventually, things quenched abruptly, but not without an interesting foray into how science works nowadays, some good takes and decent media coverage.

I first learned about it when I read an article in Ars Technica, “What’s going on with the reports of a room-temperature superconductor?”, where I saw the name of my friend Sinéad popping up. She was in the spotlight because she had run some very complicated simulations to determine whether LK-99 could be a candidate for superconductivity, and found that the material does have some interesting features – volume collapse and flat bands – the latter being a common feature of superconductors.

Now that I have a captive audience…(welcome new followers!) a monster thread on what my paper says, the approximations and the caveats… (1/aleph) https://t.co/twSIsn1Ho9

— Sinéad Griffin (@sineatrix) August 2, 2023

Alas, it seems that the results from the initial paper failed to be reproduced by other teams, who in passing found some interesting properties of this class of materials. Inna Vishik, who was running the ALS UEC Seminar Series: Science Enabled by ALS-U with me, summarized it well:
“The detective work that wraps up all of the pieces of the original observation — I think that’s really fantastic,” she says. “And it’s relatively rare.”
LK-99 isn’t a superconductor — how science sleuths solved the mystery – Nature
There are a lot of things happening on the AI for Big Science front (AI for large-scale facilities, such as synchrotrons).
The recently published DOE AI for Science, Energy, and Security Report provides interesting insights, and a much-needed update to the AI for Science Report of 2020.

Computing facilities are upgrading to provide scientists with the tools to engage with the latest advances in machine learning. I recently visited NERSC’s Perlmutter supercomputer, and it is LOADED with GPUs for AI training.

Meanwhile, companies with large computing capabilities are making interesting forays into using AI for science – for instance Meta, which is developing OpenCatalyst in collaboration with Carnegie Mellon University, with the goal of creating AI models to speed up the study of catalysts, whose simulation is generally very computationally intensive (see the Berkeley Lab Materials Project). Now the cool part is to verify these results using x-ray diffraction at a synchrotron facility. Something a little similar happened with AlphaFold, where newly derived structures may need to be tested with x-rays at the Advanced Light Source: Deep-Learning AI Program Accurately Predicts Key Rotavirus Protein Fold (ALS News)