The Israel-Palestine conflict

If you cannot get this, you should educate yourself, I'm afraid. And what trap are you talking about… It's pretty much straightforward.

If you don't understand the difference between online users deliberately manipulating a search engine algorithm and a news story taking its time to show up on whatever aggregator or search engine you're using, I really don't know what to tell you.
 

How does “Donald Duck” showing up for “President Donald” have anything to do with user manipulation rather than the search engine's own algorithm?
 

Actually, I was wrong about this. I went by the old Google-bombing thing people used to do 20 years ago (where they made "miserable failure" return the White House profile of George W. Bush). This is really just an example of Google's highly publicised shitty AI at work, the same shitty AI that recommended putting glue on pizza. There is nothing deliberate here, just incompetence.
 

Could be, but for a giant like Google it certainly looks suspicious. At least they fixed it now.

Back on topic: US Secretary of Defense: if Israel is attacked, we will definitely help defend it. Another blank check, instead of spanking Netanyahu's ass. It's hopeless.
 

No, 5, it doesn't look suspicious. Google's AI has been in the news for months for being so pathetically bad that it became a meme.

But is president duck an AI issue, or just a good ol' traditional algorithm thing?
 
No, AI is far more complex. It's more extreme than comparing a piston with an engine.

An algorithm is a set of instructions: a preset, rigid, coded recipe that gets executed when it encounters a trigger. AI, on the other hand (an extremely broad term covering a myriad of specializations and subsets), is a group of algorithms that can modify themselves and create new algorithms in response to learned inputs and data, as opposed to relying solely on the inputs they were designed to recognize as triggers. This ability to change, adapt and grow based on new data is what gets described as “intelligence.”
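
To make that concrete, here's a deliberately tiny sketch of the fixed-recipe versus learned-rule distinction in Python. Everything in it (the spam-filter framing, the data, the 2x threshold) is invented for illustration; real AI systems are vastly more elaborate:

```python
from collections import Counter

# Fixed algorithm: trigger and response are hard-coded forever.
def fixed_spam_filter(message: str) -> bool:
    return "free money" in message.lower()  # preset, rigid recipe

# "Learning" version: the rule itself is derived from example data,
# so new data changes the behavior without a human rewriting anything.
def train_keyword_filter(spam: list, ham: list) -> set:
    spam_words = Counter(w for m in spam for w in m.lower().split())
    ham_words = Counter(w for m in ham for w in m.lower().split())
    # Keep words that show up in spam far more often than in ham.
    return {w for w, n in spam_words.items() if n > 2 * ham_words[w]}

def learned_spam_filter(message: str, spam_words: set) -> bool:
    return any(w in spam_words for w in message.lower().split())

rules = train_keyword_filter(
    spam=["free money now", "claim free money"],
    ham=["money transfer receipt", "meeting now"],
)
print(learned_spam_filter("totally free money", rules))  # True
```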

 
As for whether president duck is an AI issue: prompt suggestions have typically been based on which words would be most likely to follow what you already typed, judging from other people's executed searches. It wouldn't surprise me at all if “Duck” were the most likely word to come after “Donald” in a Google search, and once it starts auto-suggesting “Duck” you're of course going to get a lot of people who accidentally search the autocompleted “President Donald Duck”, which then makes it even more likely to be recommended.

Also note that “Regan” was the #2 autocomplete: misspelled, and suggested after “Donald” rather than “Ronald”. Again, suggested based on the worst possible data set, random users' inputs. This stuff doesn't surprise me much, especially if they'd had some recent accidental cache wipe and had been operating on a smaller set of inputs for a while.
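
For what it's worth, here's a minimal sketch of that feedback loop, assuming the crudest possible popularity-based autocomplete. The query data is entirely invented, and this is obviously not Google's actual system:

```python
from collections import Counter, defaultdict

# next_word_counts["donald"] counts which words followed "donald"
# in previously executed searches.
next_word_counts = defaultdict(Counter)

def record_search(query: str) -> None:
    words = query.lower().split()
    for prev, nxt in zip(words, words[1:]):
        next_word_counts[prev][nxt] += 1

def suggest(last_word: str, k: int = 2) -> list:
    return [w for w, _ in next_word_counts[last_word.lower()].most_common(k)]

# Seed with hypothetical past searches.
for q in ["donald duck comics", "donald duck voice", "donald trump news"]:
    record_search(q)

print(suggest("donald"))  # ['duck', 'trump']: raw popularity wins

# The feedback loop: users who accept the bad suggestion execute it,
# which records it as one more real search and entrenches it further.
record_search("president donald duck")
print(suggest("donald"))  # 'duck' pulls even further ahead
```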

More complex in execution, yes, but not in concept.

An algorithm is a procedure you follow that, given certain inputs, will produce a desired output.
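
A textbook instance of that definition, just as an illustration (my example, nothing specific to this story):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: fixed steps; same inputs, same output."""
    while b:
        a, b = b, a % b
    return a

assert gcd(48, 18) == 6
```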

A machine-learned algorithm is one where the machine is given input data and a set of test cases that are required to pass, and then the machine is given free rein to find any statistical associations in the input data that allow it to produce results passing all of the required tests; those associations are then used as the algorithm to process data going forward. It basically outsources the creation of the algorithm to the machine, and it does so in such a way that you will never really know how it's reaching its conclusions. You also have no good way to fix bugs other than retraining with additional input data and new test cases, which may produce an entirely different learned algorithm with its own quirks and bugs.

Because of the black-box nature of ML algorithms, they should never be used for anything involving serious risks or requiring fair treatment across different groups of people, because they are basically guaranteed to fail in some cases which you will never be able to predict or prevent. ML algorithms are typically used for things like self-driving cars, facial recognition and automated resume evaluation by HR, and people are going to try to use them in military and judicial applications too. You know, all the areas where they should absolutely never be used as a final arbiter.
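
A hedged sketch of that workflow, with invented data, scikit-learn assumed as the library, and the resume framing picked on purpose:

```python
from sklearn.tree import DecisionTreeClassifier

# Invented data: [years_experience, typos_in_resume] -> hired (1) or not (0).
X_train = [[1, 9], [2, 7], [8, 0], [10, 1], [5, 2], [0, 12]]
y_train = [0, 0, 1, 1, 1, 0]

# The machine, not the programmer, decides which associations matter.
model = DecisionTreeClassifier().fit(X_train, y_train)

# The "test cases" the learned algorithm is required to pass.
assert list(model.predict([[9, 1], [1, 10]])) == [1, 0]

# The black-box problem: it will also confidently score inputs unlike
# anything it was trained on, and you can't easily say why.
print(model.predict([[0, 0]]))
```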

AI is a buzzword for machine-learned algorithms based on large language models and related ideas, where inputs are broken down into basic components and the algorithm attempts to regurgitate the most likely next set of components based on what you’ve already given it. Same idea as any other ML algorithm, but expanded in scope to many, many different vectors and trained on mountains of inputs and huge sets of tests. With text it’s kind of obvious what happened — they hoovered up the internet as their input data and then used some of those same inputs as output test examples to try to generate natural-sounding responses. This was then expanded to other things like audio, pictures, video, etc. It has all the same caveats as any other ML-based algorithm, but the scope of it guarantees a wider spread of bugs (rebranded “hallucinations”) and the impossibility of putting sufficient guard rails around them. All you can do is filter the inputs and outputs to try to censor undesirable content, but the rest is a ridiculously complicated black box.
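
Boiled down to a toy example (the probability table below is invented; a real model learns billions of such associations rather than reading them from a dict):

```python
import random

# Hypothetical learned table: (previous two tokens) -> next-token odds.
next_token_probs = {
    ("president", "donald"): {"trump": 0.6, "duck": 0.4},
    ("donald", "trump"): {"said": 0.5, "news": 0.5},
}

def generate(tokens: list, steps: int) -> list:
    for _ in range(steps):
        dist = next_token_probs.get(tuple(tokens[-2:]))
        if dist is None:
            break  # unseen context; a real model guesses anyway
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return tokens

# Plausible-sounding continuations with zero understanding behind them.
print(generate(["president", "donald"], steps=2))
```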

AGI (artificial general intelligence) is about taking those concepts to the next step, where a model can basically function independently and create its own models as needed, and be able to respond in such a way that it could consistently pass for a conscious human being. Still a black box, still riddled with unfixable bugs, still uncontrollable in the end, but everyone’s racing to get there. What could go wrong? :rolleyes:
 

There is no excuse, Jer!! :lol:


OK, if you go many layers deep the basis is the algorithm, just as atoms and DNA are common to humans and bacteria alike. But frankly, the revolutionary idea of a dynamic, adaptable set of algorithms makes it a different beast in my eyes.

I hear you, and you put it quite nicely. The base is the same, but the more I think about it, the more I find a closer relation between a simple machine (i.e. a lever) and an internal combustion engine than between an algorithm and AI.
 
Well, to a software engineer it’s all the same shit, just grander in scope.
 

Back on the actual topic, it's also worth pointing out that the Biden administration was promoting, or at least pretending to promote (but still), the negotiations between Israel and Hamas. Think about that. And then one side assassinates the lead negotiator of the other side.
This is a huge FU to the Biden administration by Netanyahu, and still they are so spineless that the only thing they manage to say is: we will defend you no matter what.
Add to that the 50+ standing ovations a week prior to the assassination, and Netanyahu meeting with Biden after that speech. Whether he disclosed his intentions or not, this is beyond ugly.
 
No, that Hamas leader just reaped what he sowed.
Lol. Israel has slaughtered more innocent people in the past 10 months than Hamas has in its entire existence. And let's not forget who actually funded and supported Hamas this whole time...

But regardless, Israel has no right to indiscriminately bomb its sovereign neighbours with impunity. What they did was a desperate last move, because they know full well that President Harris won't tolerate any more of their shit.

Netanyahu's days are numbered. Send that bastard to The Hague and free Palestine.
 
Netanyahu is a dud, but Israel did the right thing. Like KK's Priest said: if you sow the wind, you will reap the whirlwind. No mercy for Hamas scumbags.
 
Watched this on Monday; only posting it now because it only just got clipped down to the relevant section regarding the West Bank.

 

Not available in the UK, but I'm assuming it's the West Bank episode? They show that on Sky, so sadly they block the YouTube episodes.
 