It was merged after they were rightfully ridiculed by the community.

The awful response to the backlash by matwojo really takes the cake:

I’ve learned today that you are sensitive to ensuring human readability over any concerns in regard to AI consumption

75 points

Fuck AI code generation. I gave it a fair shot and it’s a waste of time that actively makes my job harder.

29 points

My employer hired a consultancy to implement some Terraform and Ansible code for some cloud infra. What we got back was hundreds of lines of machine-generated code that do not work, and we are now suing the consultant.

Worst part is that the guy who reviewed it ended up writing everything himself.

16 points

I use AI as a rubber duck or to get general ideas all the time. For example, I wanted to make a “hand” of cards splayed out using css, so I asked AI and it gave me a nice starting point I could tweak without having to fuss with figuring the formula to tilt which element at which angle. It’s also quite good at guessing what boring boilerplate code I need to type next.
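The "which element at which angle" formula the commenter mentions is simple to state: spread the cards evenly across a total arc, centered on zero. A minimal sketch (the function name, arc size, and defaults are my own illustration, not from the thread):

```python
def fan_angles(n_cards: int, total_arc_deg: float = 40.0) -> list[float]:
    """Return a rotation angle in degrees for each card in a fanned hand.

    Angles are spaced evenly across `total_arc_deg` and centered on 0,
    so a 5-card hand with a 40-degree arc gets -20, -10, 0, 10, 20.
    """
    if n_cards == 1:
        return [0.0]
    step = total_arc_deg / (n_cards - 1)
    return [-total_arc_deg / 2 + i * step for i in range(n_cards)]

# Each angle would then be applied per card, e.g. as
# `transform: rotate(<angle>deg)` in the CSS the commenter was tweaking.
print(fan_angles(5))
```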

Another example: I was trying to figure out an architecture that adhered to OOP/SOLID principles for a specific task, and asked for an example implementation. I was able to test and think through a ton of permutations before landing on what I was taking to prod.

I think it’s a nice tool for the toolbelt, but it isn’t replacing a programmer anytime soon. You have to know what to ask and be able to intelligently analyze what it spits out at you.

38 points

They excused it as “I’m overworked” which… fair enough, but that doesn’t excuse this. I blame Microsoft, not this fella.

5 points

Later, they comment:

I’ve learned today that you are sensitive to ensuring human readability over any concerns in regard to AI consumption

Their takeaway from:

  1. They rejected the change, with the only disclosed reason being that AI is worse at reading the new form
  2. The community said AI readability should not be prioritized over human readability
  3. The community pointed out that it may not even be a problem for AI to read
  4. The community suggested at least considering an improvement in a form that both can read well

is that the community wants to “ensure human readability over any concerns in regard to AI”.

I don’t think this is only about MS or being overworked. Yes, the push-back was harsh. But they’re responding passive-aggressively, claiming the community is pushing the opposite extreme when, very clearly to me, it is not.

Maybe you can say that conclusion is also due to being overworked and not investing the time to read through the comments. But I dunno. There’s no need to reply in that passive-aggressive tone and claim unreasonable things.

5 points

AI is good for the messy inbox problem, but it ain’t perfect.

35 points

The more apt headline is that Microsoft doesn’t pay enough people to review their documentation PRs. The org has 572 members, but nearly every PR to this repo is processed by this one guy.

34 points

He’s right that it’s probably harder for AI to understand. But he’s wrong in every other way possible. Human understanding should trump AI understanding, at least while these tools are as unreliable as they currently are.

Maybe one day AI will know how to not bullshit, and everyone will use it, and then we’ll start writing documentation specifically for AI. But that’s a long way off.

29 points

If it can’t understand human text, then is it really worth using? Isn’t that the minimum standard here: to get context and understanding from text?

I don’t have any skin in this game, but this seems backwards and stupid… Especially since all current AI is basically fancy pattern matching.

-5 points

It can understand, just not as well.

4 points

Having AI not bullshit will require an entirely different set of algorithms than LLMs, or ML in general. ML by design approximates answers, and you don’t use it for anything that’s deterministic and has a single correct answer. So, in that regard, we’re basically at square one.

You can keep slapping checks on top of the random text prediction it gives you, but if you had a way of checking whether something is really true for every case imaginable, you could probably just use that to generate the reply instead, and it couldn’t be something that’s also ML/random.

-5 points

You can’t confidently say that because nobody knows how to solve the bullshitting issue. It might end up being very similar to current LLMs.

2 points

He admitted himself that it might not be harder for AI to understand.

22 points

LLMs will replace software engineers any day now!

Also LLMs:
