model_tar_gz @lemmy.world
Posts 0
Comments 537
AI Expert Warns Crash Is Imminent As AI Improvements Hit Brick Wall
  • That’s fair. I see what I see at an engineering and architecture level. You see what you see at the business level.

    That said, I stand by my statement because I and most of my colleagues in similar roles get continued, repeated, and expanded-scope engagements. Definitely in LLMs and genAI in general, especially over the last 3-5 years or so, but not just in LLMs.

    “AI” is an incredibly wide and deep field; much more so than the common perception of what it is and does.

    Perhaps I’m just not as jaded in my tech career.

    > operations research, and conventional software which never makes mistakes if it's programmed correctly.

    Now this is where I push back. I spent the first decade of my tech career doing ops research/industrial engineering (in parallel with process engineering). You’d shit a brick if you knew how much “fudge-factoring” and how many “completely disconnected from reality, aka we have no fucking clue” assumptions go into the “conventional” models that inform supply-chain analytics, business process engineering, etc. To state that they “never make mistakes” is laughable.
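
    To make that concrete, here’s a toy sketch (every number is hypothetical, invented purely for illustration) of a textbook safety-stock calculation with exactly the kind of hand-tuned multiplier I mean:

    ```python
    import math

    forecast_demand = 1200   # units/week, itself the output of a model full of assumptions
    demand_stddev = 300      # usually estimated from thin or dirty historical data
    lead_time_weeks = 2
    z_service = 1.65         # z-score for a ~95% service level
    fudge_factor = 1.2       # the "we have no clue" multiplier, tuned by feel

    # Textbook formula, quietly scaled by the fudge factor before anyone sees it
    safety_stock = z_service * demand_stddev * math.sqrt(lead_time_weeks) * fudge_factor
    reorder_point = forecast_demand * lead_time_weeks + safety_stock
    print(round(safety_stock), round(reorder_point))
    ```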

  • AI Expert Warns Crash Is Imminent As AI Improvements Hit Brick Wall
  • Absolutely not true. Disclaimer: I do work for NVIDIA as a forward-deployed AI Engineer/Solutions Architect, meaning I don’t build AI software internally for NVIDIA but embed with their customers’ engineering teams to help them build their AI software and deploy and run their models on NVIDIA hardware and software. Edit: any opinions stated are solely my own; NVIDIA has a PR office to state any official company opinions.

    To state this as simply as possible: I wouldn’t have a job if our customers weren’t seeing tremendous benefit from AI technology. The companies I work with are typically very sensitive to the CapEx and OpEx costs of AI; they self-serve in private clouds. If it doesn’t help them make money (revenue growth) or save money (efficiency), then it’s gone, and so am I. I’ve seen it happen: entire engineering teams laid off because a technology just couldn’t be implemented in a cost-effective way.

    LLMs are a small subset of AI and Accelerated-Compute workflows in general.

  • 'Crimea is gone' — Senior Trump advisor says Ukraine needs to have 'realistic' war aims
  • All you need is a brilliant high school student with a knack for nuclear physics, a rebellious streak, and a desire to prove their intellectual superiority. As shown in the 1986 documentary, The Manhattan Project, even a resourceful teen can assemble a nuclear weapon in a matter of weeks if they manage to access the right materials. Just look for someone with the right mix of ambition, genius, and a little disregard for the rules, and you’ll have yourself a homemade nuclear device in no time!

  • A battle is raging over the definition of open-source AI
  • We’re looking at this from opposite sides of the same coin.

    The NN graph is written at a high level in Python using frameworks (PyTorch, TensorFlow; man, I really don’t miss TF after jumping to Torch :) ).

    But the calculations don’t execute in the Python interpreter. Sure, you could write them to do so, but it would be sloooow. The actual network of calculations happens within the framework internals, in C++. Then, depending on the hardware you want to run it on, you go down to BLAS or CUDA, etc., all of which are written in low-level languages like Fortran or C. (There’s a minimal sketch of this at the end of this comment.)

    NumPy fits into places all throughout this stack, and its performant pieces are mostly implemented in C.

    Any way you slice it: the point of the post I was responding to is that AI IS CODE. No two ways about that. It’s also the weights, biases, and activations of the models that have been trained.
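
    A minimal sketch of that layering, assuming PyTorch with a CUDA build available (sizes are hypothetical, nothing from the original thread):

    ```python
    # The graph is *defined* in Python, but the heavy math executes in the
    # framework's compiled C++/CUDA kernels, not in the Python interpreter.
    import torch

    model = torch.nn.Linear(1024, 1024)   # Python handle; weights live in C++ tensors
    x = torch.randn(64, 1024)

    if torch.cuda.is_available():
        model, x = model.cuda(), x.cuda() # same graph, now dispatched to CUDA/cuBLAS

    y = model(x)  # Python only orchestrates; the GEMM runs in compiled code
    ```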

  • Add a discrete GPU or not?
  • Holy shit that’s awesome.

    My work gives me unlimited uptime on a dedicated A10. AI engineer working for NVIDIA. Doubt they’d care if I set something up like this, or if they’d even know.

    But I have a 4090 at home anyway, so do I even need it, or is this just another way to explore the infinite hobby of tinkering with computing?

  • A battle is raging over the definition of open-source AI
  • Neural nets are typically written in C; frameworks (like Torch or TensorFlow) then abstract on top of that, providing higher-level APIs to languages like Python (most commonly) or JavaScript.

    There are some other NN implementations in Rust, C++, etc.
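
    As a small illustration of that layering (using NumPy rather than a full NN framework), the Python call below is a thin wrapper over a compiled C/Fortran BLAS routine:

    ```python
    import numpy as np

    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)
    c = a @ b         # dispatched to the linked BLAS GEMM (OpenBLAS, MKL, ...), not Python loops

    np.show_config()  # shows which low-level BLAS/LAPACK libraries NumPy was built against
    ```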

  • Patient gamers, what are your favorite OSTs?
  • I listen to the Stellaris OST a lot these days. Turns out that good music for planning galactic domination is also good music for designing and building software.

    Don’t really play the game much anymore though. That was one of my main pandemic games.