Any data scientists out there? What's your go-to programming language and tools for your work?
No surprise, I use Python, but I've recently started experimenting with polars instead of pandas. I've enjoyed it so far, but I'm not sure the benefits for my team's work will be enough to outweigh the cost of migrating our existing pandas/numpy code to polars.
I've also started playing with Grafana as a quick dashboarding utility to make some basic visualizations on live production databases.
I'm not a data scientist but I support a handful. They all use Python for the most part, but a few of them (still?) use R. Then there's the small group that just throws everything into Excel 🤷🏻‍♂️
Then there's the small group that just throws everything into Excel
Interesting. Excel is certainly capable enough but I would think data set size limitations would be a frequent issue. Maybe not as frequent as I would have thought though.
R with the tidyverse is really amazing; the syntax is so natural I rarely need to check the docs to quickly do basic data transformation/plotting. Definitely more intuitive than pandas (and I learnt that first).
R is my go-to, since that's what my uni taught me (Utrecht University). But I've been learning pandas in Python on the side for the versatility (and my CV).
Probably should have elaborated more in the original comment, but essentially I'm not a professional, so the freedom of creating custom UI plus having standard structures like 2D and 3D transformations is worth it.
It also has a Python-esque language, a good built-in IDE, documentation, generic GPU access, and, most importantly for me, it's extremely cross-platform.
Mostly visualisations though, with Rust doing the actual legwork.
Not a data scientist, but an actuary. I use Python and pandas in Jupyter notebooks (VS Code). I think it would be cool to use polars, but my datasets are not big enough to justify the move.
I learned SQL before pandas. It's still tabular data, but the mechanisms to mutate/modify/filter the data follow different methodologies. It took a long time to get comfy with pandas. It wasn't until I understood that the way you interact with a database table and a dataframe are very different that I finally started to get a grasp on pandas.
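To make the comparison concrete, here's a rough sketch of how the same SQL-style query maps onto pandas method calls (toy data and made-up column names, just for illustration):

```python
import pandas as pd

# Toy dataframe standing in for a database table.
df = pd.DataFrame({"dept": ["a", "a", "b"], "salary": [100, 200, 300]})

# SQL: SELECT dept, AVG(salary) FROM df WHERE salary > 100 GROUP BY dept
out = (
    df[df["salary"] > 100]                        # WHERE clause as a boolean mask
      .groupby("dept", as_index=False)["salary"]  # GROUP BY
      .mean()                                     # aggregate
)
print(out)
```

The mental shift is that pandas has no single declarative query: you chain operations on the dataframe object, and each step produces a new intermediate result.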
It's a paradigm shift from pandas. In polars, you define a pipeline, a set of instructions to perform on a dataframe, and execute them all at once at the end of your transformation. In other words, it's lazy. Pandas is eager, where every part of the transformation happens sequentially and in isolation. Polars also has an eager API, but you likely want the lazy API in a production script.
Because it's lazy, polars performs query optimization, like a database does with a SQL query. At the end of the day, if you're using polars for data engineering or in a pipeline, it'll likely run much faster and use memory more efficiently. Polars also executes operations in parallel.
I only dabble, but I really like Julia. It has several language and architecture features I really like compared to Python. It also looks like the libraries have gotten really good since I last used it much.
Anyone have good pointers to DevOps resources or strategies? My data scientists keep stating that they need different approaches to CI/CD, but never seem to have actual requirements beyond wanting to do things differently. I'd really like to offer them an easy way to get what they need while also complying with company policy and industry best practices, but there don't seem to be any real differences in what they require.
Not a data scientist but I deal with a lot of monitoring and security data. I mainly use Kusto/KQL. It's super easy to pick up and deceptively powerful. The biggest downside is you are pretty much tied into the M$ ecosystem. I haven't seen it gain any usage outside of Azure.