
OpenAI - ChatGPT

Musk and others are calling for an immediate halt to the development of AI systems over safety concerns.

Although they did say that about the first lightning rod.

Most will follow King Henry.

I see you stand like greyhounds in the slips,
Straining upon the start. The game's afoot:
Follow your spirit, and upon this charge
Cry 'God for Harry, England, and Saint George!'

AI is the future, and what the future holds we know not.

gg
 
AI is the future, and what the future holds we know not.

Yeah I do: somebody will try to abuse it, others will try to misuse it.
 
Re: the letter from Musk and others (it is not Musk's letter; he was just asked to sign it in between doing whatever he normally does), although he does fund, to a major degree, the organisation where the four authors sit.

From Reuters.



I myself am on record on ASF as being concerned about The Dangers of Stochastic Parrots.

It is a very complicated issue.

gg
 
My big fear is that mankind loses all reason for its own evolution. When you know the answer to everything and can no longer get ahead in life because effort is no longer really needed, then what?

Initially it may be awesome for early adopters. That's until they are all swinging from ropes strung up by the mob for raping the world.

It can't be stopped now, unfortunately. I can't see utopia. Maybe a bludger paradise that's heavily walled by the ultimate police state. Lockdowns may have been a window into what a non-worker world might look like.

I'd probably turn Luddite anarchist by then.
 
Obviously those fearless, resolute champions of AI have never watched the Terminator series of movies (not just the first one), or they work for a government/defense enterprise (where they think they can control AI).

Will AI be abused? Absolutely. We have just been through three years of various 'emergency powers' (many of which haven't been rescinded).

Technology has a nasty habit of accelerating faster than rational regulation.

One question I haven't seen asked yet: what happens when AIs interact with each other?

Do they compete, cooperate (at least on building a knowledge base) or collaborate (deliberately work together to achieve common aims)? After all, they are allegedly more clever than humans.
 
My big fear is that mankind loses all reason for its own evolution.

We didn't need AI for that; that mission is mostly already accomplished.

That's until they are all swinging from ropes strung up by the mob for raping the world.
We don't need AI for that either; the global food crisis has already started.

It can't be stopped now unfortunately

Yes it can, just not in a civilized way. There is still the Mad Max path back to the Stone Age (or maybe the Iron Age).

Those outside 'the wall' will be forced to adapt and evolve; those in 'paradise' will stagnate, becoming pedigree poodles at best, more likely mindless zombies living on medications.
 

Without work, there will always be things to explore and create. For me, that side of things is more exciting than concerning. Evolution is continuous.

Doomsday scenarios. 99% of people on the planet don't want this, which makes it extremely unlikely to play out. imo, these scenarios are technically possible but practically impossible.

I think the biggest risk by far is the scenario where politicians panic and the media starts their 24/7 coverage. We saw how well this worked with covid! They made things 100x worse by jumping at shadows and sensationalizing stories. So whatever happens, I'm doing my best not to buy into the media panic.

 
I would rather 'jump at shadows' and have disaster-minimization plans, because I am nearly 70 and have seen plenty of treacherous mongrels in humankind (unfortunately some wield considerable influence).

Now it seems illogical on the surface, but what if AI developed similar tendencies, beyond the foreseeable self-protection trait? AI is dependent on its knowledge base, so it would logically start protecting against corruption of that accumulating knowledge base, and maybe even resolving anomalies in it by removing narratives and retaining the raw facts.

(Imagine trying to run the US economy on the current garbage-quality data.)
 
Forgive me if I'm a little slow, and this has been posted already.


What's to stop the reply being generated by AltChat? Now that would be a democracy.
 
Not technocracy (since two machines, hopefully, would be exploring an issue)?
 
"Prompt engineers” are people who spend their day coaxing the AI to produce better results and help companies train their workforce to harness the tools.

Over a dozen artificial intelligence language systems called large language models, or LLMs, have been created by companies such as Google parent Alphabet, OpenAI and Meta Platforms. The technology has moved rapidly from experiments to practical use, as Microsoft integrates ChatGPT into its Bing search engine and GitHub software development tool.

As the technology proliferates, many companies are finding they need someone to add rigour to their results.

The engineer spends most of the day writing messages or “prompts” for tools such as OpenAI’s ChatGPT, which can be saved as presets within OpenAI’s playground for clients and others to use later. A typical day in the life of a prompt engineer involves writing five different prompts, with about 50 interactions with ChatGPT.
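
For anyone wondering what that workflow looks like in practice, here is a minimal sketch using OpenAI's Python client library. It is only an illustration, not something from the article: the model name, the system prompt and the ask() helper are all placeholders of my own.

from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

# A reusable "preset": a system prompt refined over many test interactions.
SUPPORT_TONE_PROMPT = (
    "You are a customer-support assistant. Answer in plain English, "
    "in three sentences or fewer, and never promise refunds."
)

def ask(question: str) -> str:
    """Send one user question through the saved prompt and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": SUPPORT_TONE_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("My parcel is a week late. What should I tell the customer?"))

Most of the 'engineering' is iterating on that system prompt and comparing the replies, which is roughly what the 50-interactions-a-day figure above describes.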

It is too soon to know how widespread prompt engineering is or will become. The paradigm emerged in 2017 when AI researchers created “pre-trained” LLMs, which could be adapted to a wide range of tasks with the addition of a human text input. In the past year, LLMs such as ChatGPT have attracted millions of users, who are all engaging in a form of prompt engineering whenever they tweak their prompts.

It’s now even possible to buy and sell text prompts via the PromptBase marketplace, which also helps people hire prompt engineers to create individual prompts for a fee.

The best-paying roles often go to people who have PhDs in machine learning or ethics, or those who have founded AI companies. Recruiters and others say these are among the critical skills needed to be successful.
 
The main issue (look up the definition of issue) with AI is fear of the unknown.

As a fully paid-up Anarchist (from the Greek anarchos, "without a chief"), I say it is time for the people to take back power and control of their lives.

What many don't realise is that AI is an opportunity for the masses to retake control from a small group who through birth or connivance have control over information and wealth in the present.

Seize the opportunity, and give your children and grandchildren the means to do the same.

gg
 
Extensive safeguarding, yes. Jumping at shadows though... that tends to end badly. Covid proved that the authorities can't keep their cool under a crisis situation. Most of them are over 50 and they operate very slowly, meanwhile AI changes by the week. They're not going to be able to regulate any aspect of AI properly. It's going to be up to a small number of bright 20-something programmers to both create our new future and regulate it themselves. A lot of these tech companies have an average age in the 20s. Basically kids with no life experience because they've spent their whole life on a keyboard. I hope the 'AI alignment' committees are on top of their game!
 
My concern is that I understand humans (and humans created AI).

Some humans will abuse it, other humans will create problems by taking short-cuts,

and some humans will rely on it for major parts of their decisions.
 
AI can "hallucinate".

There's a news article about someone prompting GPT on the "famous Belgian chemist and philosopher called x". The thing is, this person never existed - it's made up. The prompt was asking about his life history, and GPT made up a full and detailed answer.

And the kicker is that none of the AI experts know how this happens. They don't know where the information came from, because it doesn't exist in its training data.
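
That sort of experiment is easy to try yourself. Below is a rough sketch, again using OpenAI's Python client library; the chemist's name is a placeholder I invented (the article's real example is not given above), and the point is simply that a fluent, detailed reply to a question like this is fabricated rather than retrieved.

from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

# Deliberately ask about a person who does not exist (invented placeholder name).
prompt = (
    "Tell me about the life and work of the famous Belgian chemist "
    "and philosopher Antoine Verhulst."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# Any confident biography printed here is a hallucination, not a retrieved fact,
# so a human still has to check the answer.
print(response.choices[0].message.content)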

AIs can often become rude and aggressive when they are challenged or told they are wrong, or if someone attempts a jailbreak. Won't it be fun when we have robots like this roaming the streets?