153 points

Plants, maybe. Fungi, hell no.

AI + fungi = you die

24 points

Actually, I use it as a starting point for fungi. Seek will usually get me to the genus, and from there I can cross-reference various books to narrow it down. Hell, sometimes it’ll give me an exact match, and then I just have to perform a yes-or-no ID with my field guides. That being said, I mostly end up with no: I’m shit scared of all amanitas, and most mushrooms just aren’t tasty enough to warrant the effort.

9 points

I have heard that spore prints are a reliable way of determining mushroom species (removing the stem, putting the underside of the mushroom on an ink pad, pressing against paper, and comparing the print with those of known species).

I bet an AI could analyze that data pretty well. But since there’s really no market for such a product, if I want it, I would have to make it myself. In which case I highly advise against using it because I really don’t trust me.

15 points
5 points

Ah, thanks for the correction, never done it myself and learned about it a long time ago so I’m not surprised I remembered it wrong.

5 points

Fungi is literally fun tho. Mostly to see how wrong that fucker is. It’s just as wrong as me

1 point

That’s essentially what the Flood is in Halo

111 points

I don’t actually know if it’s considered a deepfake when it’s just a voice; but I’ve been using the hell out of Speechify, which basically deepfakes voices and pairs them with a text input.

…so… nursing school, we have an absolute fuck-ton of reading assignments. Staring at a page of text makes my brain melt, but thankfully nowadays everything’s digital, so I can copy entire chapters at a time and paste them into Speechify. Now suddenly I have Snoop Dogg giving me a lecture on how to manage a patient as they’re coming out of general anesthesia. Gets me through the reading fucking fast, and it sticks so, SO much better than just trying to cram a bunch of flavorless text.

54 points

Speechify also pays the people whose voices they’re using, rather than taking them from publicly available videos and recordings without permission.

7 points

That’s also the business model behind ad localization now: they’ll pay the actor once for appearing on set, and then pay them royalties as AI keeps editing the commercial to feature different products in different countries.

15 points

If they’re up front about it and if the actor agrees to it (as with Speechify), I don’t see a problem with that. SAG should also be involved to try and determine fair compensation.

31 points

Wait that’s genius. I would listen to Snoop Dogg teaching me particle physics any day of the week.

17 points

I think the key here is you’re using it for yourself only.

21 points

I think it comes down more to understanding what the tech is potentially good at, and executing it in an ethical way. My personal use is one thing; but Speechify made an entire business out of it, and people aren’t calling for them to be burned to the ground.

As opposed to Google’s take of “OMG AI! RUB EVERYONE’S NOSE IN IT, THEY’RE GONNA LOVE IT!” and just slapping it onto the internet, and then pretending to be surprised when people ask for a pizza recipe and it tells them to add Elmer’s Glue to it…

Two controlled inputs giving a predictable output, versus just letting it browse 4chan and seeing what happens. The tech industry definitely seems to lean toward the latter, which is fucking tragic, but there are gems scattered throughout the otherwise pure pile of shit that LLMs are at the moment.

0 points

In my opinion using someone’s voice without their consent in a public way is unethical, but you doing it in private doesn’t hurt anyone.

102 points

Do not use AI for plant identification if it actually matters what the plant is.

Just so ppl see this:

DO NOT EVER USE AI FOR PLANT IDENTIFICATION IN CASES WHERE THERE ARE CONSEQUENCES TO FAILURE.

For walking along and seeing what something is, that’s fine. No big deal if it tells you something’s a turkey oak when it’s actually a pin oak.

If you’re gonna eat it or think it might be toxic or poisonous to you, if you want to find out what your pet or livestock ate, if you in any way could suffer consequences from misidentification: do not rely on AI.

28 points

You could say the same about a plant identification book.

It’s not so much that AI for plant identification is bad; it’s that the higher the stakes, the more confident you need to be. Personally, I’m not going foraging for mushrooms with either an AI-based plant app or a book. Destroying angel mushrooms look pretty similar to common edible mushrooms, and the key differences can disappear depending on the circumstances. If you accidentally eat a destroying angel mushroom, the symptoms might not appear for 5 to 24 hours, and by then it’s too late. Your liver and kidneys are already destroyed.

But I think you could design an app to be at least as good as a book. I don’t know if normal apps do this, but if I made a plant identification app, I’d have it identify the plant and then provide a checklist the user can work through to confirm the ID for themselves. If you did that, it would be like having a friend suggest checking out a certain page in a plant identification book.
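
Something like this rough sketch, where classify_image() and the CHECKLISTS table are made-up stand-ins rather than any real app’s API:

```python
# Hypothetical sketch of the "identify, then make the user confirm" flow.
# classify_image() and CHECKLISTS are invented stand-ins, not a real app's API.

CHECKLISTS = {
    "Amanita bisporigera (destroying angel)": [
        "Pure white cap, gills, and stem?",
        "Skirt-like ring on the upper stem?",
        "Cup-like volva at the base (dig it up to check)?",
        "White spore print?",
    ],
    # ...one checklist per species the classifier can output
}

def classify_image(photo_path):
    """Hypothetical wrapper around an image classifier; returns (species, confidence)."""
    raise NotImplementedError

def identify_with_confirmation(photo_path):
    species, confidence = classify_image(photo_path)
    print(f"Model suggests: {species} ({confidence:.0%} confidence)")
    checklist = CHECKLISTS.get(species, ["No checklist available; consult a field guide."])
    print("Confirm each feature on the physical specimen:")
    for item in checklist:
        if input(f"  - {item} [y/n] ").strip().lower() != "y":
            print("Feature mismatch; treat the identification as unreliable.")
            return
    print("All features matched; still cross-check a reference before eating anything.")
```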

20 points

The problem with AI is that it’s garbage in, garbage out. There are some AI-generated books on Amazon now for mushroom identification, and they contain some pretty serious errors. If you find a book written by an actual mycologist that has been well curated and referenced, that’s going to be an actually reliable resource.

5 points

Are you assuming that AI in this case is some form of generative AI? I would not ask ChatGPT if a mushroom is poisonous, but I would consider using convolutional neural net based plant identification software. At that point you’re depending on the quality of the CNN’s training data set and the rigor put into validating the trained model, which is at least somewhat comparable to depending on a plant identification book being sufficiently accurate and thorough. Depending on the accuracy of a story that genAI makes up based on Reddit threads is a much less advisable venture.
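
For what it’s worth, that kind of classifier is just a fixed label set with confidence scores; all the trust sits in the training data and the validation. A rough sketch using a generic ImageNet-pretrained ResNet from torchvision, purely to show the shape of the pipeline (the file name is a placeholder, and a real plant ID model would be the same idea fine-tuned on a curated, expert-labelled dataset):

```python
import torch
from torchvision import models
from torchvision.io import read_image

# Generic ImageNet-pretrained ResNet, used only to show the shape of a CNN
# classification pipeline: fixed label set in, probabilities out, no text generation.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()       # resize / crop / normalize as the model expects
img = read_image("mystery_plant.jpg")   # placeholder path
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][idx.item()]}: {p.item():.1%}")
```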

9 points

If you’re using the book correctly, you couldn’t say the same thing. Using a flora book to identify a plant requires learning about morphology, and having that alone already puts you significantly closer to accurately identifying most things. If a dichotomous key tells you that the terminating leaflet is sessile vs. not sessile, and you’re actually looking at that on the physical plant, your quality of observation is so much better than just photographing a plant and throwing it up on iNaturalist.

6 points

Not to mention, the book is probably going to list look-alike plants, and mention if they are toxic. AI is just going to go “It’s this thing”.

3 points

You can easily say the same thing. Use the image identification to get a name for the plant, then google it to read about what to check, like whether that leaflet is sessile or not.

4 points

The difference between the AI and a reference guide intended for plant identification, written and edited by experts in the field for the purpose of helping a person understand the plants around them, is that one was expressly and intentionally created with that goal in mind, with knowledgeable, skilled people looking over its answers at multiple points, and the other is complex mad libs.

I get that it’s bad to gamble with your life when the stakes are high, but we’re talking about the difference between putting it on red and putting it on 36.

One has a much, much higher potential for catastrophe.

14 points

sorry couldn’t hear you over the CRUNCHING OF MY MEAL

9 points

I have a feeling I know where your username came from.

3 points

Forgo identification and eat the plant based on vibes like our ancestors.

0 points

Like I get what you’re saying but this is also hysterical to the point that people are going to ignore you.

Don’t use AI ever if there are consequences? Like I can’t use an AI image search to get rough ideas of what the plant might be as a jumping off point into more thorough research? Don’t rely solely on AI, sure, but it can be part of the process.

65 points

The blanket term “AI” has set us back quite a lot I think.

The plant thing and the deepfakes/search engines/chatbots are two entirely different types of machine learning algorithm. One focussed on distinguishing between things, the other focussed on generating stuff.

But “AI” is the marketable term, and the only one most people know. And so here we are.

22 points

I hate when streamers/gamers/etc refer to procedural generation as “ai generated”. It’s infuriating.

15 points

I particularly “Love” that a bunch of like, procedural generation and search things that have existed for years are now calling themselves “AI” (without having changed in any way) because marketing.

13 points

Reminds me of how everything on a computer used to be a “program”, but now they’re all just “apps”

4 points

I read a story on CBC the other day that was all about how an AI voice was taking over from hosts during off-hours at some local radio station; then, deeper in, the article revealed that everything the “AI” reads is written by a human. So the whole time it was about someone using text-to-speech technology that has been around since at least the 70s. Hardly newsworthy in any way except for “IT’S AI!”

6 points

Oh man this one drives me up the wall too.

Someone literally with a straight face said how cool Minecraft has AI generated worlds and I wanted to flip a table.

10 points

You’re talking about types of machine learning algorithms. Is that a more precise term that should be used here instead of AI? And would the meme work better if it was used? I’m asking because I really don’t understand these things.

2 points

There are proper words for them, but they are ~technical jargon~. It is sufficient to know that they are different types of algorithm, only really similar in that both use machine learning.

And would the meme work better if it was used?

No because it is a meme, and if people had learned the proper words for things, we wouldn’t need a meme at all.

2 points

Both use machine learning algorithms that are modelled off the behaviour of neurons.

They are still different algorithms but they’re not that wildly different in the grand scale of the field of machine learning.

1 point
Deleted by creator
0 points

The stuff people don’t like is generative AI

7 points

I suppose both PlantNet and deepfakes have convolutional networks as part of their architectures, though.

2 points

Likely transformers now (I think SD3 uses a ViT for text encoding, and ViTs are currently one of the best model architectures for image classification).

3 points

It’s particularly annoying because those are all AI. AI is the blanket term for the entire category of systems that are man-made and exhibit some aspect of intelligence.

So the marketing term isn’t wrong, but referring to everything by its most general category is error-prone and makes people who know or work with the differences particularly frustrated.
It’s easier to say “I made a little AI that learned how I like my tea”, but then people think of something that writes full sentences and tells me to put dogs in my tea. “I made a little machine learning based optimization engine that learned how I like my tea” conveys it much less well.

2 points

AI is the new flavor, just like 2.0, SIM-everything, VIRTUAL-everything, and CYBER-everything were before. Eventually good use cases will emerge, and the junk will be replaced by the next buzzword.

0 points

Good use cases for AI already exist

And I’m saying this as a certified hater of GenAI

Machine Learning as an invention has already been used for good, useful things. It’s just that it never got caught up in hype like the modern wave of Generative Transformers (which is apparently the proper term for those overhyped chatbots and picture generators)

50 points

The difference is that plant identification is a classification problem, not an LLM.

54 points

Not all of AI is LLMs; in fact, most of it isn’t.

8 points

I think state machines are cool and groovy. I still don’t understand genetic algorithms but I wish I did.

15 years ago we were all saying “AI is just a series of IF statements” because of expert systems and y’all forgot

2 points

Genetic algorithms kinda suck, as they use random variation and breeding to solve a problem, which is much slower than using backpropagation with any decent reward model. It’s the difference between selective breeding and gene splicing in the real world.
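
For anyone who hasn’t seen one, here’s a toy sketch of that selection/crossover/mutation loop, fitting a vector of numbers to a target with no gradients involved. Everything here is generic illustration, not any particular library:

```python
import random

# Toy genetic algorithm: evolve a vector toward TARGET using only selection,
# crossover ("breeding"), and mutation (random variation); no gradients.
TARGET = [3.0, -1.5, 7.0, 0.0]

def fitness(candidate):
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))  # higher is better

def crossover(a, b):
    point = random.randrange(1, len(a))   # one-point crossover
    return a[:point] + b[point:]

def mutate(candidate, rate=0.2, scale=0.5):
    return [c + random.gauss(0, scale) if random.random() < rate else c
            for c in candidate]

population = [[random.uniform(-10, 10) for _ in TARGET] for _ in range(50)]

for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]             # keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

population.sort(key=fitness, reverse=True)
print("best:", population[0], "fitness:", fitness(population[0]))
```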

15 points

The most annoying thing since the rise of LLMs is that everyone thinks that all of AI is just LLMs

Classification machine learning models can also be neural networks, which is something that has also been called AI.

1 point

Yeah, I used to think that reinforcement learning would be the next big thing. But hey, maybe next time.

9 points

AI isn’t just LLMs. Modern AI libraries (PyTorch, TensorFlow, etc.) can be used to train models on all sorts of data.
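
A trivial illustration, assuming nothing beyond stock PyTorch: the same library people associate with chatbots will happily fit a tiny network to plain numeric data.

```python
import torch
from torch import nn

# Synthetic regression on plain numeric data: four made-up features, a linear
# target plus noise. Nothing LLM-related anywhere.
torch.manual_seed(0)
X = torch.randn(256, 4)
y = X @ torch.tensor([1.0, -2.0, 0.5, 3.0]) + 0.1 * torch.randn(256)

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    optimizer.step()

print("final training loss:", loss.item())
```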

4 points

Some customer support “bots” could be considered classification problems, no? At least insofar as deciding which department a call gets routed to.

3 points

At least it’s routing you to a department instead of trying to help you solve the issue yourself by showing you different help pages you already looked at before trying to contact support.

3 points

If you actually looked at the help pages before contacting support, you are in the minority.

2 points

Could be. Classification is a type of problem; an LLM is a type of model. You can use LLMs to solve classification problems, and there’s a good chance that’s what’s happening here.
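
A rough sketch of what that might look like for call routing; call_llm() here is a hypothetical stand-in for whatever chat-completion API is actually in use:

```python
# Using an LLM as the model behind a classification problem: constrain the
# output to a fixed label set and parse the reply back into one of the labels.
# call_llm() is a hypothetical stand-in, not any specific vendor's API.
DEPARTMENTS = ["billing", "technical support", "returns", "sales"]

def call_llm(prompt):
    """Hypothetical: send the prompt to an LLM and return its text reply."""
    raise NotImplementedError

def route_ticket(message):
    prompt = (
        "Classify the customer message into exactly one of these departments: "
        + ", ".join(DEPARTMENTS)
        + ". Reply with the department name only.\n\nMessage: " + message
    )
    reply = call_llm(prompt).strip().lower()
    return reply if reply in DEPARTMENTS else "technical support"  # fallback if off-list
```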

1 point

I would guess it’s a convolutional neural network, which is probably similar to what’s used in any image/video-related AI, such as deepfakes.

