I’ve seen this term thrown around a lot lately and I just wanted to read your opinion on the matter. I feel like I’m going insane.

Vibe coding is essentially asking AI to do the whole coding process, and then checking the code for errors and bugs (optional).

  • @[email protected]
    link
    fedilink
    English
    24 months ago

    I think “vibe coding” is a very unfitting term for that. I haven’t seen it called that before.

  • Gamma · 32 points · 4 months ago

    They can vibe as much as they want, but don’t ever ask me to touch the mess they create.

    • @[email protected]
      link
      fedilink
      74 months ago

      Once companies recognize the full extent of their technical debt, they will likely need to hire a substantial number of highly experienced software engineers to address the issues, many of which stem from over-reliance on copying and pasting outputs from large language models.

      • Gamma · 4 points · 4 months ago

        A new post-LLM coding opportunity: turd polishing

    • @[email protected]
      link
      fedilink
      74 months ago

      Yup, sure, but this is basically a “no true Scotsman” argument, which isn’t at all what the “AI” hype is about.

      Put yourself in the shoes of some naive corporate exec. You want the software to get made, but you don’t want to pay for it. To you, people (especially experts like programmers) are an expense. You’d very much like to skip that pesky part and go straight from an idea to the product. This is what the “AI” hype is largely about.

      “AI” companies are trying to set up a narrative, in which programmers can be replaced with LLMs. Execs don’t care whether you’re coding or not - they care about expenses and profits, and they know a team of programmers is more expensive than an OpenAI subscription.

      • @[email protected]
        link
        fedilink
        54 months ago

        I don’t want to put myself in the shoes of a greedy corporate exec, but I can put myself in the shoes of a non-developer wanting to create an app for his own needs, so I understand why some people may turn to AI for that. I’m OK with that, but that is not coding.

  • @[email protected]
    link
    fedilink
    54 months ago

    I probably wouldn’t do it. I do have AI help at times, but it is more for bouncing ideas off of, and occasionally it’ll mention a library or tech stack I haven’t heard of that allegedly accomplishes what I’m looking to do. Then I go research the library or tech stack and determine if there is value.

  • @[email protected]
    link
    fedilink
    54 months ago

    Based on my experience of AI coding, I think this will only work for simple/common tasks, like writing a Python script to download a CSV file and convert it to JSON.
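
    A minimal sketch of that kind of task, for the sake of argument (the URL is a placeholder, not a real endpoint):

```python
import csv
import io
import json
from urllib.request import urlopen  # used only in the commented-out fetch below


def csv_to_json(csv_text: str) -> str:
    """Convert CSV text (with a header row) into a JSON array of objects."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)


# Fetching the file would look something like this (placeholder URL):
# csv_text = urlopen("https://example.com/data.csv").read().decode("utf-8")
# print(csv_to_json(csv_text))
```

    This is exactly the kind of all-over-the-internet glue code an LLM tends to get right.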

    As soon as you get anywhere that isn’t all over the internet it starts to bullshit.

    But if you’re working in a domain it’s decent at, why not? I’ve found that in those cases fixing the AI’s mistakes can be faster than writing it myself. In fact, I often find it useful for helping me decide how I want to write code, because the AI does something dumb and I go “no, I obviously don’t want it like that”…

  • @[email protected]
    link
    fedilink
    English
    134 months ago

    This seems like a game you’d play with other programmers, lol.

    I can understand using AI to write some potentially verbose or syntactically hellish lines to save time and headaches.

    The whole coding process? No. 😭

    • @[email protected]
      link
      fedilink
      14 months ago

      You can save time at the cost of headaches, or you can save headaches at the cost of time. You cannot save both time and headaches, you can at most defer the time and the headaches until the next time you have to touch the code, but the time doubles and the headaches triple.

      • @[email protected]
        link
        fedilink
        English
        14 months ago

        AI can type tedious snippets faster than I can, but I can just read the code and revise it if needed.

  • @[email protected]
    link
    fedilink
    English
    174 months ago

    Nearly every time I ask ChatGPT a question about a well-established tech stack, its responses are erroneous to the point of being useless. It frequently provides examples using fabricated, non-existent functionality, and the code samples are awful.

    What’s the point in getting AI to write code that I’m just going to have to completely rewrite?

    • @[email protected]
      link
      fedilink
      64 months ago

      There’s one valid use-case for LLMs: when you have writer’s block, it can help to have something resembling an end product instead of a blank page. Sadly, this doesn’t really work for programming, because incorrect code is simply worse than no code at all. Every line of code is a potential bug and every line of incorrect code is a guaranteed bug.

      I use an LLM with great success to write bad fanfiction though.

  • TehPers · 5 points · 4 months ago

    For personal projects, I don’t really care what you do. If someone who doesn’t know how to write a line of code asks an LLM to generate a simple program for them to use on their own, that doesn’t really bother me. Just don’t ask me to look at the code, and definitely don’t ask me to use the tool.

  • @[email protected]
    link
    fedilink
    64 months ago

    We should let these twits enjoy their shit on twitter. The AI hype is just like the crypto hype, it’ll fade.

    The name vibe coding sounds like a drunk evening with friends getting an MVP off the ground, but nothing more.

    Anti Commercial-AI license

  • @[email protected]
    link
    fedilink
    English
    54 months ago

    As an experiment / as a bit of a gag, I tried using Claude 3.7 Sonnet with Cline to write some simple cryptography code in Rust - use ECDHE to establish an ephemeral symmetric key, and then use AES256-GCM (with a counter in the nonce) to encrypt packets from client->server and server->client, using off-the-shelf RustCrypto libraries.

    It got the interface right, but it got some details really wrong:

    • It stored way more information than it needed in the structure tracking state, some of it very sensitive.
    • It repeatedly converted back and forth between byte arrays and the proper types unnecessarily - reducing type safety and making things slower.
    • Instead of using type safe enums it defined integer constants for no good reason.
    • It logged information about failures as variable length strings, creating a possible timing side channel attack.
    • Despite having a 96 bit nonce to work with (-1 bit to identify client->server and server->client), it used a 32 bit integer to represent the sequence number.
    • And it “helpfully” used wrapping_add to increment the 32-bit sequence number! For those who don’t know much Rust and/or much cryptography: the golden rule of using ciphers like GCM is that you must never ever re-use the same nonce for the same key (otherwise you leak the XOR of the two messages). wrapping_add explicitly means when you get up to the maximum number (and remember, it’s only 32 bits, so there’s only about 4.3 billion numbers) it silently wraps back to 0. The secure implementation would be to explicitly fail if you go past the maximum size for the integer before attempting to encrypt / decrypt - and the smart choice would be to use at least 64 bits.
    • It also rolled its own bespoke hash-based key extension function instead of using HKDF (which was available right there in the library, and callable with far less code than it generated).
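
    The fail-before-wrap rule can be sketched like this (in Python rather than Rust for brevity; the class and bit layout are illustrative, not the code the model produced):

```python
class NonceCounter:
    """96-bit GCM nonce: 1 direction bit + a 64-bit monotonic counter.

    Reusing a (key, nonce) pair with GCM leaks the XOR of the two
    plaintexts, so this counter fails closed instead of wrapping to 0.
    """

    MAX = 2**64 - 1  # use 64 bits for the sequence number, not 32

    def __init__(self, client_to_server: bool):
        self.direction = 1 if client_to_server else 0
        self.seq = 0

    def next_nonce(self) -> bytes:
        # Explicitly fail before exhausting the counter - the opposite
        # of wrapping_add, which would silently reuse nonces.
        if self.seq > self.MAX:
            raise OverflowError("nonce counter exhausted; rekey required")
        # High bit distinguishes client->server from server->client;
        # low 64 bits hold the sequence number.
        value = (self.direction << 95) | self.seq
        self.seq += 1
        return value.to_bytes(12, "big")
```

    The caller would rekey (e.g. re-run the ECDHE exchange) well before the counter is exhausted, rather than ever letting it wrap.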

    To be fair, I didn’t really expect it to work well. Some kind of security auditor agent that does a pass over all the output might be able to find some of the issues, and pass it back to another agent to correct - which could make vibe coding more secure (to be proven).

    But right now, I’d not put “vibe coded” output into production without someone going over it manually with a fine-toothed comb looking for security and stability issues.

  • @[email protected]
    link
    fedilink
    2
    edit-2
    4 months ago

    IMO it will “succeed” in the early phase. Pre-seed startups will be able to demo and get investors more easily, which I hear is already happening.

    However, it’s not sustainable, and either somebody figures out a practical transition/rewrite strategy as they try to go to market, or the startup dies while trying to scale up.

    We’ll see a lower success rate from these companies, in a bit of an I-told-you-so moment, which reduces over-investment in the practice. Under the new equilibrium, vibe coding remains useful for super early demos, hackathons, and throwaway explorations, and people learn to do the transition/rewrite either earlier or not at all for core systems, depending on the resources founders have available at such an early stage.