• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • Hawk@lemmynsfw.com to memes@lemmy.world · Warning
    2 upvotes · 5 days ago

    Yeah, that’s always a risk, but as you said, humans make mistakes too. And if you adjust your approach to software development, writing more tests and using strict interfaces or type annotations, etc., the output is pretty reliable and it definitely saves time.
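
    For illustration, here’s a rough sketch of the kind of guardrails I mean, assuming Python with type hints and pytest; RateLimiter and the method names are made up, not from any real project. The Protocol acts as the strict interface and the test pins down the behaviour, so whatever the LLM generates either conforms or fails fast.

    ```python
    # Hypothetical example: a strict interface plus a test as guardrails
    # for generated code.
    from typing import Protocol


    class RateLimiter(Protocol):
        """Strict interface: any implementation must match this signature."""

        def allow(self, key: str, now: float) -> bool: ...


    class AllowAll:
        """Placeholder implementation; generated code would replace this."""

        def allow(self, key: str, now: float) -> bool:
            return True


    def check(limiter: RateLimiter) -> bool:
        # Type-checked against the Protocol, no matter who wrote the class.
        return limiter.allow("user-1", now=0.0)


    def test_interface_is_satisfied() -> None:
        assert check(AllowAll())
    ```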


  • Hawk@lemmynsfw.com to memes@lemmy.world · Warning
    2 upvotes · 5 days ago

    They can also be really good for quickly writing code: line up a whole bunch of tests and all the type signatures, then copy and paste that scaffolding a few times, maybe with a Vim macro.

    The LLM will fill in the middle correctly maybe 90% of the time. Review the diff in git, make sure the tests pass, and that’s an extra 20 minutes I get to spend with my wife and kids.
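
    Something like this, as a rough sketch (Python and pytest assumed; parse_duration and format_duration are made-up names): I write the signatures, docstrings, and tests, the model fills in the bodies, and then the diff plus the test run tell me whether to keep it.

    ```python
    # Hypothetical scaffolding: typed stubs plus tests, with the bodies left
    # for the LLM to fill in. Run `pytest` and review `git diff` afterwards.
    import pytest


    def parse_duration(text: str) -> int:
        """Convert strings like '1h30m' into seconds."""
        raise NotImplementedError  # body left for the LLM


    def format_duration(seconds: int) -> str:
        """Inverse of parse_duration, e.g. 5400 -> '1h30m'."""
        raise NotImplementedError  # body left for the LLM


    @pytest.mark.parametrize(
        ("text", "seconds"),
        [("90s", 90), ("2m", 120), ("1h30m", 5400)],
    )
    def test_round_trip(text: str, seconds: int) -> None:
        assert parse_duration(text) == seconds
        assert parse_duration(format_duration(seconds)) == seconds
    ```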


  • Hawk@lemmynsfw.com to memes@lemmy.world · Warning
    3 upvotes · 1 downvote · 5 days ago

    It would be on the order of an intensive video game, maybe. Depends on the size of the model, etc.

    Training is definitely expensive, but you’re right that it’s a one-time cost.

    Overall, the challenge is that it’s very inefficient: using a machine learning model for something that could be implemented deductively isn’t ideal. (On the other hand, if it saves human effort…)

    To a degree, trained models can also be updated on newer data (e.g., by freezing layers, or with LoRA, GaLore, Hypernetworks, etc.); there’s a rough sketch of the layer-freezing idea at the end of this comment. Newer data can also be injected into the prompt so that responses stay aligned with, for example, newer versions of software.

    The electricity consumption is a concern, but it’s probably not going to be the end of the world.
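
    To make the layer-freezing point concrete, a minimal sketch, assuming PyTorch; the tiny model and layer sizes are placeholders, not any actual LLM.

    ```python
    # Rough sketch of "freeze most of the network, retrain a small part on
    # newer data" -- the model here is a stand-in, not a real LLM.
    import torch
    from torch import nn

    model = nn.Sequential(
        nn.Linear(512, 512),  # stands in for the expensive pretrained layers
        nn.ReLU(),
        nn.Linear(512, 10),   # small head that actually gets retrained
    )

    # Freeze everything, then unfreeze only the final layer.
    for param in model.parameters():
        param.requires_grad = False
    for param in model[-1].parameters():
        param.requires_grad = True

    # Only the unfrozen parameters go to the optimizer, so each update touches
    # a small fraction of the weights compared to full retraining.
    optimizer = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad], lr=1e-3
    )
    ```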