A few years ago, I got interested in the then-nascent Julia language (julialang.org), a new open-source language with Matlab-like syntax and C-like performance, thanks to its just-in-time compiler.
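For readers who haven't seen it, here is a minimal sketch of what that Matlab-flavored syntax looks like (the function name is just an illustration, not from any particular package):

```julia
# Matlab-style: 1-based indexing, function/end blocks, no type annotations needed.
function mysum(x)
    s = 0.0
    for i in 1:length(x)
        s += x[i]
    end
    return s
end

mysum(rand(10^6))  # the first call triggers JIT compilation; subsequent calls run at compiled speed
```

Despite looking like an interpreted script, the loop compiles down to machine code, which is where the C-like performance claim comes from.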
Large Synoptic Survey Telescope (LSST, Chile) data being processed with Julia on supercomputers, with a 200x speedup (from https://arxiv.org/pdf/1611.03404.pdf)
It now seems that the language is gaining traction, with many available packages, lots of REPL integration (it works with Atom+Hydrogen, and I suspect Jupyter gets its first syllable from Julia and Python), and it is delivering on performance.
Julia is now used on supercomputers, such as Berkeley Lab's NERSC, and taught at MIT (by no less than Steven G. Johnson, the guy who brought us FFTW!). I've also noticed that some of the researchers from Harvard's RoLi Lab whom I've invited to SPIE DCS 2018 are sharing the Julia code from their paper "Pan-neuronal calcium imaging with cellular resolution in freely swimming zebrafish". Pretty cool!
Julia used for code-sharing in a Nature publication. I wish I could see that every day!
I got a chance to attend parts of JuliaCon 2017 in Berkeley. I was amazed by how dynamic the community was, in part supported by the Moore Foundation (Carly Strasser, now head of the Coko Foundation), and happy to see Chris Holdgraf (my former editor at the Science Review) thriving at the Berkeley Institute for Data Science (BIDS).
Julia picking up speed at Intel (picture taken during JuliaCon 2017)
I started sharing some code for basic image processing (JLo) on GitHub. Tell me what you think!
(By the way, I finally shared my Meep scripts on GitHub, and they're here.)