I should’ve tried that. I ordered an Apple Watch silicone band dupe (direct from the vendor’s site) for like $5. Same band would have been $50 from Apple, except the color I wanted wasn’t even available anymore.
The band came in the wrong size, so I wrote the vendor. They apologized, refunded me (I think), and sent me a pack of about 10 different colored bands. Unfortunately, those were all the same size as the first one.
Wow, what a terrible set of moves by whoever at AMD made that call. Lack of CUDA support is the only thing keeping me from buying AMD GPUs, and I’m pretty sure I’m not alone.
This product looks awful.
First, ever since they added ads to Google TV back in 2021 (even on the Nvidia Shield TV), it’s been a subpar experience. Well, it was for me, at least - maybe it’s improved, but I switched to Apple TV as a result and haven’t looked back.
Second, why would anyone get this over an Nvidia Shield TV or an Apple TV, other than ignorance or an incredibly strict budget? The Apple TV 4K is $130/$150 new and the Shield TV is $150 new. The Shield TV, which came out in 2017, is faster than this. The Apple TV 4K is 16x faster. And if you’re open to refurbished units, even an older Apple TV is a better buy.
For anyone on a strict budget, the $30-$50 Chromecasts make way more sense than this device. Yes, they’re ending production of those, but there are still competitors near that price point.
The only thing I can think of is that they’re banking on brand recognition, or hoping that the segment of people who don’t have smart home hubs, are unaware of alternatives (like the $35 SmartThings Hub Dongle), and aren’t in the Apple ecosystem is big enough.
Yes - you can set multiple daily limits (they reset at midnight and that can’t be changed), and each one can apply to one or more apps, categories, or websites. You can also select almost all the apps in a category and omit a couple, but then future apps in that category won’t be limited automatically. And you can choose specific apps to never be limited.
So you could set a 3-hour limit covering Social apps, Games, a couple of individually chosen other apps, and some specific websites, plus a 5-minute limit for the Facebook app and facebook.com, if you wanted.
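The limit structure described above could be sketched roughly like this. This is a hypothetical model in Python - the class, the `OtherApp` name, and the placeholder site are illustrative, not Apple’s actual Screen Time API:

```python
# Hypothetical model of Screen Time limits - illustrative only, not Apple's API.
from dataclasses import dataclass, field

@dataclass
class Limit:
    minutes_per_day: int                          # resets at midnight; not configurable
    categories: set = field(default_factory=set)  # e.g. {"Social", "Games"}
    apps: set = field(default_factory=set)        # individually chosen apps
    websites: set = field(default_factory=set)    # specific domains

limits = [
    # 3-hour limit: two categories, one extra app, one placeholder site
    Limit(180, categories={"Social", "Games"}, apps={"OtherApp"},
          websites={"somewebsite.example"}),
    # 5-minute limit: the Facebook app plus facebook.com
    Limit(5, apps={"Facebook"}, websites={"facebook.com"}),
]

def limits_for(app=None, category=None, site=None):
    """Return every limit that covers the given app, category, or site."""
    return [l for l in limits
            if (app and app in l.apps)
            or (category and category in l.categories)
            or (site and site in l.websites)]

print([l.minutes_per_day for l in limits_for(site="facebook.com")])  # → [5]
```

Again, none of this is the real API - it’s just to make the shape of the limits concrete.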
If you mean the screen time tracking, then I don’t think you can do that, but it gives you both your overall time and breakdowns by category (at least the top few categories), so you can do the math on your own.
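Doing that math is just a subtraction - the numbers below are made up for illustration:

```python
# Made-up example numbers - Screen Time shows you the total and top categories.
total_minutes = 312
top_categories = {"Social": 140, "Entertainment": 75, "Productivity": 40}

# Time not covered by the listed categories:
other = total_minutes - sum(top_categories.values())
print(other)  # → 57
```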
It’s relevant because, when people talk about “AI” that isn’t actually intelligent (i.e., all AI), they’re being incoherent. What exactly are they talking about? Computers in general? It’s just noise, spam, etc.
If your objection is that AI “isn’t actually intelligent,” then you’re just being pedantic and your objection has no substance. Replace “AI” with “systems that leverage machine learning and whose inner workings we don’t fully understand” if you need to.
Did you watch the video? Do you have any familiarity with how AI technologies are being used today? At least one of those answers must be a no for you to have thought that the video’s message was incoherent.
Let me give you an example. As part of the ongoing conflict in Gaza, Israel has been using AI systems nicknamed “the Gospel” and “Lavender” to identify Hamas militants, associates, and the buildings they operate from. This information is then rubber-stamped by a human analyst, and unguided munitions are sent to the identified location, often destroying entire buildings (filled with other people, generally the target’s family) to kill the identified target.
There are countless incidents of AI being used without sufficient oversight, often resulting in harm - to the general public, to minorities, or even to the business that put the AI in place.
The paperclip video is a cautionary tale against giving an AI system too much power or not enough oversight. That warning is relevant today, regardless of the precise architecture of the underlying system.
And for anyone in the “AGI won’t happen, there’s no danger” camp: what if, on the slightest chance, you’re wrong? Is the maddening rush to ship the next product without any research into what we’re doing worth that mistake? Sci-fi is fiction, but there are lessons there too, and we’re ignoring them all because “that can’t happen” is louder than “let’s be sure.”
What sorts of scenarios involving the emergence of AGI do you think regulating the availability of LLM weights and training data (or of more closely regulating AI training, research, and development within the “closed source” shops like OpenAI) would help us avoid?
And how does that threat compare to impending damage from climate change if we don’t reduce energy consumption + reliance on fossil fuels?
Besides, even without AGI, humans alone can do huge damage with “bad” AI tools - and we’re not looking into that, either.
When I search for “misuse of AI” I get a ton of results from people talking about exactly that.
My guess is they thought they were 99% done but that the 1% (“just gotta deal with these edge case hallucinations”) ended up requiring a lot more work (maybe even an entirely new sub-system or a wholly different approach) than anticipated.
I know I suggested the issue might be hallucinations above, but what I’m genuinely curious about is how they plan to have acceptable performance without losing half or more of your usable RAM to the model.
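For a sense of scale on that RAM concern, here’s a back-of-envelope calculation. The parameter count and quantization levels are my own illustrative assumptions, not anything the vendor has confirmed:

```python
# Weights-only RAM footprint for an on-device model; ignores KV cache
# and activations. Parameter count and precision are illustrative guesses.
def model_ram_gb(params_billions: float, bits_per_param: int) -> float:
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    print(f"3B params @ {bits}-bit ≈ {model_ram_gb(3, bits):.1f} GB")
```

Even with aggressive 4-bit quantization, a 3B-parameter model is around 1.5 GB of weights alone, which is a big bite out of a phone’s usable RAM - so the “half or more” worry isn’t far-fetched.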