But is President Duck an AI issue, or just a good ol' traditional algorithm thing?
The prompt suggestions have typically been based on which words would be most likely to follow what you’ve already typed, judging from other people’s executed searches. It wouldn’t surprise me at all if “Duck” was the most likely word to come next after “Donald” in a Google search, and once it starts auto-suggesting “Duck” you’re of course going to get a lot of people who accidentally search the autocompleted “President Donald Duck”, which then makes it even more likely to be recommended.
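For the curious, here’s a rough sketch of the kind of counting that could be going on. This is not Google’s actual system; the log, the `suggest` function, and the numbers are all made up to illustrate the idea — suggestions come from tallying other people’s queries, not from any understanding:

```python
from collections import Counter

# Hypothetical log of searches other people actually executed
search_log = [
    "donald duck",
    "donald duck comics",
    "donald trump",
    "president donald duck",   # accidental searches of the suggestion feed back in
    "president donald duck",
    "president ronald reagan",
]

def suggest(prefix, n=3):
    """Suggest the most common next words typed after `prefix` in the log."""
    continuations = Counter()
    for query in search_log:
        if query.startswith(prefix + " "):
            rest = query[len(prefix):].split()
            if rest:
                continuations[rest[0]] += 1
    return [word for word, _ in continuations.most_common(n)]

print(suggest("president donald"))  # ['duck'] -- no understanding, just counting
```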
Also note that “Regan” was the #2 autocomplete. Misspelled, and being suggested after “Donald” rather than “Ronald”. Again, suggested based on the worst possible set of data, random users’ inputs. This stuff doesn’t surprise me much, especially if they’d had some recent accidental cache wipe and had been operating on a smaller set of inputs for a while.
No, AI is far more complex.
In execution, but not in concept.
An algorithm is a procedure you follow that, given certain inputs, will produce a desired output.
A machine-learned algorithm is one where the machine is given input data and a set of test cases that are required to pass, and then the machine is given free rein to find
any statistical associations in the input data that will allow it to produce results that pass all of the required tests, and then those associations will be used as the algorithm to process data moving forward. It basically outsources the creation of the algorithm to the machine, and it does so in such a way that you will never really know
how it’s reaching its conclusions. You also have no good way to fix bugs other than retraining with additional input data and new test cases, which may produce an entirely different learned algorithm with its own quirks and bugs. Because of the black-box nature of ML algorithms, they should never be used for anything involving serious risks or requiring fair treatment across different groups of people, because they are basically guaranteed to fail in some cases which you will never be able to predict or prevent. Yet ML algorithms are being used for things like self-driving cars, facial recognition, and automated resume evaluation by HR, and they’re going to try to use them in military and judicial applications too. You know, all the areas where they should absolutely never be used as a final arbiter.
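To make that concrete, here’s a minimal toy sketch of the “give it data and tests, let it find its own associations” idea. All the names and numbers here are invented for illustration, and real systems are unimaginably bigger, which is exactly the problem:

```python
training_data = [  # (features, label) pairs -- made-up numbers for illustration
    ([1.0, 0.2], "spam"),
    ([0.9, 0.1], "spam"),
    ([0.1, 0.8], "ham"),
    ([0.2, 0.9], "ham"),
]

def train(examples):
    """'Learn' by averaging the feature vectors seen for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(model, features):
    """Answer with whichever learned centroid is closest -- a statistical association."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

model = train(training_data)

# The "required test cases": once these pass, the learned numbers *are* the algorithm.
test_cases = [([0.95, 0.15], "spam"), ([0.15, 0.85], "ham")]
assert all(predict(model, f) == expected for f, expected in test_cases)

# An input nothing was tested on -- the model still answers confidently, and
# nothing in `model` explains why, or tells you whether to trust it.
print(predict(model, [0.5, 0.5]))
```

Scale that up from a handful of averaged numbers to billions of learned weights and the “look at the model and see what it does” option disappears entirely.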
AI is a buzzword for machine-learned algorithms based on large language models and related ideas, where inputs are broken down into basic components and the algorithm attempts to regurgitate the most likely next set of components based on what you’ve already given it. Same idea as any other ML algorithm, but expanded in scope to many, many different vectors and trained on mountains of inputs and huge sets of tests. With text it’s kind of obvious what happened — they hoovered up the internet as their input data and then used some of those same inputs as output test examples to try to generate natural-sounding responses. This was then expanded to other things like audio, pictures, video, etc. It has all the same caveats as any other ML-based algorithm, but the scope of it guarantees a wider spread of bugs (rebranded “hallucinations”) and the impossibility of putting sufficient guard rails around them. All you can do is filter the inputs and outputs to try to censor undesirable content, but the rest is a ridiculously complicated black box.
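Shrunk down to whole words and raw counts (real models use learned sub-word tokens and billions of weights, not a lookup table), the core loop looks something like this: look at what came before, emit a statistically likely continuation, repeat. The corpus and function names below are made up for illustration:

```python
from collections import Counter, defaultdict

# Tiny stand-in for "hoovering up the internet"
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which word tends to follow which.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def generate(prompt_word, length=6):
    """Repeatedly append the most common continuation -- regurgitation, not reasoning."""
    out = [prompt_word]
    for _ in range(length):
        options = followers[out[-1]]
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # plausible-looking word salad with zero understanding
```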
AGI (artificial general intelligence) is about taking those concepts to the next step, where a model can basically function independently and create its own models as needed, and be able to respond in such a way that it could consistently pass for a conscious human being. Still a black box, still riddled with unfixable bugs, still uncontrollable in the end, but everyone’s racing to get there. What could go wrong?