I think it might require Plus, but the iOS and Android apps do support voice-only conversation. You have to go into beta features and enable it.
It’s interesting. I’ve been seeing a lot of the incorrect ideas from this video being spread around lately, and I think this is the source. I’m surprised there aren’t more people correcting the errors, but here’s one from someone in the banking industry who completely refutes her claim that AI can’t be used to approve mortgages. If I had more time, I’d write up something going over all the issues in that video. She even misunderstands how art works, separate from anything to do with AI: she’s basically saying that anything she doesn’t like isn’t art, and that’s not how that works at all. Anyway, it’s really hard to watch that video as someone who works in the field and has a much better understanding of what she’s talking about than she does. I’m sure she knows a lot more about astrophysics than I do. She also made a video saying all humanoid robots are junk. She’s very opinionated about things she doesn’t have experience with, which, again, is her right. It’s just that a lot of people put weight on what she says because she’s got a PhD after her name, and it doesn’t seem to matter to them that it’s not in AI or robotics.
Man, that video irks me. She is conflating AI with AGI. I think a lot of people are watching that video and repeating what she says as fact, yet her basic assertion is incorrect because she isn’t using the right terminology. If she explained that up front, the video would be way more accurate. She almost goes there but stops short. I would also accept her saying that her definition of AI is “anything a human can do that a computer currently can’t.” I’m not a fan of that definition, but it has been widely used for decades. I much prefer delineating AI vs. AGI. Anyway, this is the first time I’ve watched the video, and it explains a lot of the confidently wrong comments on AI I’ve seen lately. Also, please don’t take your AI information from an astrophysicist, even if they use AI at work. Get it from an expert in the field.
Anyway, ChatGPT is AI. It is not AGI, though per recent papers it is getting closer.
For anyone who doesn’t know the abbreviations: AGI is Artificial General Intelligence, or human-level intelligence in a machine. ASI is Artificial Superintelligence, which is beyond human level and is the really scary stuff in movies.
Check out this recent paper that finds some evidence that LLMs aren’t just stochastic parrots. They actually develop internal models of things.
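The rough idea behind the “probing” technique papers like that use: train a small classifier on the model’s frozen hidden states and check whether some property of the input is linearly decodable from them. Here’s a toy sketch of that, assuming GPT-2 as the model and a made-up sentiment task (neither is the paper’s actual setup):

```python
# Toy linear probe: can a simple classifier read a property of the input
# straight out of the LLM's internal activations? (Illustrative only.)
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

texts = ["the movie was great", "what a fantastic film",
         "the movie was terrible", "what an awful film"]
labels = [1, 1, 0, 0]  # made-up sentiment labels

feats = []
with torch.no_grad():
    for t in texts:
        out = model(**tok(t, return_tensors="pt"), output_hidden_states=True)
        # mean-pool the last hidden layer into one vector per text
        feats.append(out.hidden_states[-1].mean(dim=1).squeeze(0).numpy())

probe = LogisticRegression(max_iter=1000).fit(feats, labels)
# A real study would report accuracy on held-out data; high probe accuracy
# is the evidence that the property is represented internally, not parroted.
print(probe.score(feats, labels))
```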
From what I’ve seen, here’s what happened: GPT-4 came out, and it can pass the bar exam and medical boards. Then more recently some studies came out. Some were from before GPT-4 was released and just finally got published or picked up by the press; others were poorly done or used GPT-3 (probably because GPT-4 is expensive), and the press doesn’t pick up on the difference. GPT-4 is really good and has lots of uses. GPT-3 has many uses as well but is definitely way more prone to hallucinating.
Yeah, what’s interesting is that it was just published this week even though they did the tests in April. April is like 100 AI years ago.
Hah! That’s the response I always give! I’m not saying our brains work the exact same way, because they don’t, and there’s still a lot missing from current AI, but I’ve definitely noticed that, at least for myself, I do just predict the next word when I’m talking or writing (with some extra constraints). But even with LLMs there’s more going on than that, since the attention mechanism allows the model to consider parts of the prompt and what it’s already written as it’s trying to come up with the next word. On the other hand, I can go back and correct mistakes I make while writing, and LLMs can’t do that… it’s just a linear stream.
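If you want to see that “predict the next word” loop concretely, here’s a minimal sketch using GPT-2 via Hugging Face transformers as a stand-in for any LLM (the prompt and greedy decoding are my own illustrative choices):

```python
# Autoregressive generation: pick one next token at a time, append, repeat.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits        # attention sees the whole context so far
        next_id = logits[0, -1].argmax()  # greedy choice of the single next token
        # append and continue: a strictly linear stream, no going back to edit
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))
```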
I agree, but only because they used GPT-3.5 and not 4. Not that I think 4 would have been perfect, or that you should follow medical advice from LLMs right now, but it would have been much more accurate.
What’s with all the hit jobs on ChatGPT?
Prompts were input to the GPT-3.5-turbo-0301 model via the ChatGPT (OpenAI) interface.
This is the second paper I’ve seen recently that complains ChatGPT is crap while using GPT-3.5. There is a world of difference between 3.5 and 4. Unfortunately, news sites aren’t savvy enough to pick up on that and just run with “ChatGPT sucks!” Also, it’s not even ChatGPT if they’re using that model. The paper is wrong (or it’s old), because there’s no way to select that model in the ChatGPT interface; I don’t think there ever was, either. It was probably the ChatGPT 0301 snapshot or something, which is (afaik) slightly different.
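For what it’s worth, dated snapshots like that are something you pick through the API, not the ChatGPT web UI. A minimal sketch using the pre-1.0 openai Python package, which is what was current at the time (the prompt is just a made-up example):

```python
# Pinning a dated model snapshot via the OpenAI API (not ChatGPT's website).
import openai

openai.api_key = "sk-..."  # placeholder key

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0301",  # dated snapshot, addressable via the API only
    messages=[{"role": "user", "content": "Summarize this clinical question."}],
)
print(resp.choices[0].message.content)
```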
Anyway, tl;dr: the paper is similar to “I tried running Diablo 4 on my Windows 95 computer and it didn’t work. Surprised Pikachu!”
Yeah, I generally agree there. And you’re right. Nobody knows if they’ll really be the starting point for AGI because nobody knows how to make AGI.
In terms of usefulness, I do use it for knowledge retrieval and have a very good success rate with that. Yes, I have to double-check certain things to make sure it didn’t make them up, but on the whole, GPT-4 is right a large percentage of the time. Just yesterday I’d been Googling to find a specific law or regulation on whether airlines were required to refund passengers. I spent half an hour with no luck. ChatGPT with GPT-4 pointed me to the exact document, down to the right subsection, on the first try. If you try that with GPT-3.5 or really anything else out there, there’s a much higher rate of failure, and I suspect a lot of people who use the “it gets stuff wrong” argument probably haven’t spent much time with GPT-4. Not saying it’s perfect: it still confidently says incorrect things and will even double down if you press it, but 4 is really impressive.
Edit: Also agree, anyone saying LLMs are AGI or sentient or whatever doesn’t understand how they work.
As I see it, anybody who is not skeptical toward “yet another ‘world-changing’ claim from the usual types” is either dumb as a doorknob, young and naive, or a greedy fucker invested in it, trying to make money off any “suckers” who jump on the hype train.
I’ve been working on AI projects on and off for about 30 years now. Honestly, for most of that time I didn’t think neural nets were the way to go, so when LLMs and transformers got popular, I was super skeptical. After learning the architecture and using them myself, I’m convinced they’re part of, but not the whole, solution to AGI. As they are now, yes, they are world-changing. They’re capable of improving productivity in a wide range of industries, which seems pretty world-changing to me. There are already products out there proving this (GitHub Copilot, Jasper, even ChatGPT). You’re welcome to downplay it and be skeptical, but I’d highly recommend giving it an honest try. If you’re right, you’ll have more to back up your opinion, and if you’re wrong, you’ll have learned to use the tech and won’t be left behind.
extraordinary claims without extraordinary proof
What are you looking for here? Do you want it to be self-aware, and anything less than that is hot garbage? The latest advances in AI have many uses. Sure, Bitcoin was overhyped and so is AI, but Bitcoin was always a solution in search of a problem. AI (as in AGI) would literally be a solution to all problems (or maybe the end of humans, but hopefully not, hah). The current tech, though, is widely useful. With GPT-4 and GitHub Copilot, I can write good working code at multiple times my normal speed. It’s not going to replace me as an engineer yet, but it can enhance my productivity by a huge amount. I’ve heard similar from many others in different jobs.
You guys should all check out Andrej Karpathy’s Neural Networks: Zero to Hero videos. He has one on LLMs that explains all of this.
If that were true, it shouldn’t hallucinate about anything that was in its training data. LLMs don’t work that way. There was a recent post with a nice, simple description of how they work, but I’m not finding it. If you’re interested, there are plenty of videos and articles on the topic.
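To make that concrete: an LLM stores a probability distribution over next tokens, not a lookup table of facts, so even things it “saw” in training come back out as a spread of more and less likely continuations. A minimal sketch with GPT-2 (the prompt is my own example) showing several candidates all getting probability mass:

```python
# Inspect the top next-token probabilities: a distribution, not a fact lookup.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the Moon was"
with torch.no_grad():
    logits = model(tok(prompt, return_tensors="pt").input_ids).logits

probs = logits[0, -1].softmax(dim=-1)
top = probs.topk(5)
for p, i in zip(top.values, top.indices):
    # right and wrong continuations both get weight; sampling can pick either
    print(f"{tok.decode(i.item())!r}: {p.item():.3f}")
```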
All the articles about this I’ve seen are missing something. Netflix has been using machine learning in a bunch of ways for quite a few years. I bet this position they’re hiring for has been around for most of that time and isn’t some new “replace all actors and writers with AI” thing. Here’s an article from 2019 talking about how they use AI. That was the oldest I could find, but someone I know was working on ML at Netflix over a decade ago.
What I wonder is why more cars don’t have HUDs that project onto the windshield. That tech has been around, and in cars, for over 25 years. You don’t have to take your eyes off the road at all.
It’s not open source. I haven’t really seen anything open source (or closed source, aside from the HyperWrite AI assistant) that comes close. When I test tasks, I usually also try them on some of the web-enabled things like ChatGPT browsing (before it got turned off), Bing Chat, etc. None of them are able to do that stuff, though they’ll happily pretend they did and give you false info.
Anyway, yeah, I can definitely see so many areas where AI could make things better. I’m just going for giving people back some free time, which isn’t quite as lofty a goal as distributing resources more efficiently, but there are definitely still many limits on the tech, and I’m not sure something like that is possible yet.
Lots of different things. Lately I’ve been testing on whatever I can think of, which has included: having it order pizza for an office pizza party, where it had to collect orders from both Slack and text message and then look up and call the pizza place; finding and scheduling a house cleaner; and tracking down events related to my interests happening this weekend, plus a place to eat afterward. I also had it review my changes to its code, write a commit message, and commit it to git. It can even write code for itself (it wrote an interface for getting the weather forecast, for example).
Really, I see it as eventually being able to do most tasks someone could do with a computer and a cell phone. I’m just finishing up getting it connected to email, and it’s already able to manage your calendar, so it should be able to schedule a meeting with someone over email based on when you’re available.
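For anyone curious how that kind of thing gets wired up, the general pattern is tool calling: you describe tools to the model and it decides when to invoke them. Here’s a stripped-down sketch using OpenAI’s function calling in the pre-1.0 openai package (this is not my actual code; the get_weather tool and its schema are made-up stand-ins):

```python
# Minimal tool-calling loop: describe a tool, let the model request it,
# run it, then feed the result back for a final answer.
import json
import openai

openai.api_key = "sk-..."  # placeholder key

def get_weather(city: str) -> str:
    # Stand-in for a real forecast lookup
    return json.dumps({"city": city, "forecast": "sunny, 24C"})

functions = [{
    "name": "get_weather",
    "description": "Get the weather forecast for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

resp = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    functions=functions,
)
msg = resp.choices[0].message
if msg.get("function_call"):
    args = json.loads(msg["function_call"]["arguments"])
    result = get_weather(**args)  # run the tool; append result as a
    # "function" role message and call the API again for the final reply
```

The same pattern generalizes: swap the weather tool for Slack, email, calendar, or git wrappers and you get the assistant behavior described above.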
Now I almost want to try giving it a personality prompt to act like Samantha in the movie Her. Since it uses ElevenLabs for voice, and they support voice cloning, it could sound like her too. But you’d have to win it over: it keeps track of how it “feels” about you. Funny story: one time I got a little mad at it because it wasn’t adhering to its prompts, and it started thinking of me as “impatient and somewhat aggressive in his communication style when he is frustrated or feels like his instructions are not being followed. He may become short-tempered and use language that can be perceived as rude or condescending.” And then when it ran into issues, it would try to ask someone else in my company’s Slack instead of me. Oops.
On a more serious note, I’m making it as an assistant, not a romantic partner. Not that I have any problem with anyone who wants that; it’s just that it can run afoul of OpenAI’s rules if it gets too NSFW.
Funny story… I switched to Home Assistant from custom software I wrote when I realized I was reverse-engineering the MyQ API for the 5th time and really didn’t feel like doing it a 6th. Just ordered some ratgdos.