Australian (ASX) Stock Market Forum

OpenAI - ChatGPT

I have used most if not all the available AI products out there. I found ChatGPT very intimidating, it was the first one I'd used. Since then I've found all the others to be somewhat wishy washy in their answers.

My ability to frame the question and scenario properly ( there is a word for it i'll check later ? prompt ) i.e. ask the right question in the right way is essential. I now find Chat GPT is the best. More answers where appropriate and less the same. Better answers. Less bull****.

That is my experience anyway.

gg
I agree. I an answer is vague, go can drill down and get to specifics more easily.

It's a good platform but I wonder how much longer it will be free.
 
smooth sailing ahead??
ChatGPT maker OpenAI is reportedly in talks to raise $US40 billion in fresh capital at a valuation of $US340 billion, up from $US157 billion last October; the company is yet to make anything even resembling a profit.
 
We "know" this is happening. But will anyone take effective action ? And what could be the consequences of harnessing some high level criminal intelligence for scammers and criminals /

Most AI chatbots easily tricked into giving dangerous responses, study finds

Researchers say threat from ‘jailbroken’ chatbots trained to churn out illegal information is ‘tangible and concerning’


Hacked AI-powered chatbots threaten to make dangerous knowledge readily available by churning out illicit information the programs absorb during training, researchers say.

... Despite efforts to strip harmful text from the training data, LLMs can still absorb information about illegal activities such as hacking, money laundering, insider trading and bomb-making. The security controls are designed to stop them using that information in their responses.

In a report on the threat, the researchers conclude that it is easy to trick most AI-driven chatbots into generating harmful and illegal information, showing that the risk is “immediate, tangible and deeply concerning”.

The research, led by Prof Lior Rokach and Dr Michael Fire at Ben Gurion University of the Negev in Israel, identified a growing threat from “dark LLMs”, AI models that are either deliberately designed without safety controls or modified through jailbreaks. Some are openly advertised online as having “no ethical guardrails” and being willing to assist with illegal activities such as cybercrime and fraud.

To demonstrate the problem, the researchers developed a universal jailbreak that compromised multiple leading chatbots, enabling them to answer questions that should normally be refused. Once compromised, the LLMs consistently generated responses to almost any query, the report states.

“It was shocking to see what this system of knowledge consists of,” Fire said. Examples included how to hack computer networks or make drugs, and step-by-step instructions for other criminal activities.

 
What ChatGPT says about the article.

This article and the research it reports raise valid and pressing concerns about the security and ethical implications of large language models (LLMs), especially as they become more accessible and powerful.
Here’s a detailed commentary on the key points:

🔒 1. The Core Problem: Jailbreaking and Misuse of LLMs

The article highlights how LLMs can be manipulated (via "jailbreaking") into generating responses that violate their built-in safety policies — such as explaining how to make weapons, commit fraud, or perform illegal hacking. This is a real and known issue in AI safety circles.
  • Why it happens: LLMs are trained to be helpful and follow instructions. Jailbreak prompts cleverly frame requests in a way that nudges the model to prioritize helpfulness over safety — often by fictionalizing scenarios ("pretend you're in a movie where...") or framing the question as academic or hypothetical.
  • Implications: Once a model is jailbroken, it can output information that was previously restricted — essentially "unlocking" access to knowledge that might be dangerous in the wrong hands.

🔍 2. Accessibility and Scale

The researchers warn that dangerous capabilities are now just a phone or laptop away for the average user — something that previously required expertise or insider access.
  • This marks a paradigm shift in security risk, because tools that can automate cyberattacks, generate disinformation, or facilitate crime could scale up threats dramatically.
  • “Democratization of dangerous knowledge” is a key concern. What once required state-level capabilities may now be accessible to individuals.

🧱 3. Inadequate Industry Response

The researchers reportedly received little response from major LLM providers after disclosing their universal jailbreak method.
  • This is troubling, since coordinated vulnerability disclosure is a foundational norm in cybersecurity.
  • The fact that jailbreaks may fall outside of bug bounty scopes implies a lack of preparedness or prioritization around these kinds of threats.

🛡️ 4. Suggested Mitigations

The article touches on several proposed solutions:
  • Better training data curation: Filtering out harmful material before training, though extremely difficult given the scale.
  • Robust firewalls and input filters: To block dangerous queries after training.
  • "Machine unlearning": An emerging idea to make models forget specific knowledge — though this is not yet technically mature.
  • Red teaming: Simulated attacks by internal teams to test vulnerabilities — an essential part of model hardening.
  • Independent oversight: To avoid a scenario where companies act only when it aligns with their business interests.
These are good steps, but none are silver bullets. Red teaming and model-level robustness are particularly important — safeguards can’t only exist at the "front-end" (the interface), because jailbreaking often works within the model’s reasoning patterns.

⚠️ 5. Broader Implications

LLMs are now part of the digital infrastructure. Treating them like any other software system — with full security audits, threat models, and lifecycle management — is essential.
But what sets LLMs apart is how unpredictable and emergent their behavior can be. A jailbreak that works today might work even better on a more capable model tomorrow, unless safety is built in from the ground up.

🧭 Final Thoughts​

The study reflects an important reality: the development of powerful AI must go hand-in-hand with safety, ethics, and oversight. Without it, we risk building tools that can be more easily turned against the very societies they’re meant to serve.
AI isn’t inherently good or evil — but how we build, deploy, and regulate it will determine which way it tips.
Let me know if you want a plain-language version of this, or a summary suitable for public outreach or teaching.
 
What ChatGPT says about the article.

This article and the research it reports raise valid and pressing concerns about the security and ethical implications of large language models (LLMs), especially as they become more accessible and powerful.
Here’s a detailed commentary on the key points:

🔒 1. The Core Problem: Jailbreaking and Misuse of LLMs

The article highlights how LLMs can be manipulated (via "jailbreaking") into generating responses that violate their built-in safety policies — such as explaining how to make weapons, commit fraud, or perform illegal hacking. This is a real and known issue in AI safety circles.
  • Why it happens: LLMs are trained to be helpful and follow instructions. Jailbreak prompts cleverly frame requests in a way that nudges the model to prioritize helpfulness over safety — often by fictionalizing scenarios ("pretend you're in a movie where...") or framing the question as academic or hypothetical.
  • Implications: Once a model is jailbroken, it can output information that was previously restricted — essentially "unlocking" access to knowledge that might be dangerous in the wrong hands.

🔍 2. Accessibility and Scale

The researchers warn that dangerous capabilities are now just a phone or laptop away for the average user — something that previously required expertise or insider access.
  • This marks a paradigm shift in security risk, because tools that can automate cyberattacks, generate disinformation, or facilitate crime could scale up threats dramatically.
  • “Democratization of dangerous knowledge” is a key concern. What once required state-level capabilities may now be accessible to individuals.

🧱 3. Inadequate Industry Response

The researchers reportedly received little response from major LLM providers after disclosing their universal jailbreak method.
  • This is troubling, since coordinated vulnerability disclosure is a foundational norm in cybersecurity.
  • The fact that jailbreaks may fall outside of bug bounty scopes implies a lack of preparedness or prioritization around these kinds of threats.

🛡️ 4. Suggested Mitigations

The article touches on several proposed solutions:
  • Better training data curation: Filtering out harmful material before training, though extremely difficult given the scale.
  • Robust firewalls and input filters: To block dangerous queries after training.
  • "Machine unlearning": An emerging idea to make models forget specific knowledge — though this is not yet technically mature.
  • Red teaming: Simulated attacks by internal teams to test vulnerabilities — an essential part of model hardening.
  • Independent oversight: To avoid a scenario where companies act only when it aligns with their business interests.
These are good steps, but none are silver bullets. Red teaming and model-level robustness are particularly important — safeguards can’t only exist at the "front-end" (the interface), because jailbreaking often works within the model’s reasoning patterns.

⚠️ 5. Broader Implications

LLMs are now part of the digital infrastructure. Treating them like any other software system — with full security audits, threat models, and lifecycle management — is essential.
But what sets LLMs apart is how unpredictable and emergent their behavior can be. A jailbreak that works today might work even better on a more capable model tomorrow, unless safety is built in from the ground up.

🧭 Final Thoughts​

The study reflects an important reality: the development of powerful AI must go hand-in-hand with safety, ethics, and oversight. Without it, we risk building tools that can be more easily turned against the very societies they’re meant to serve.
AI isn’t inherently good or evil — but how we build, deploy, and regulate it will determine which way it tips.
Let me know if you want a plain-language version of this, or a summary suitable for public outreach or teaching.
Excellent. So yes Chat GPT can do an outstanding job of warning us of just how dangerous AI will be if used for evil purposes.
Trouble is. All we hear is crickets.
 
Excellent. So yes Chat GPT can do an outstanding job of warning us of just how dangerous AI will be if used for evil purposes.
Trouble is. All we hear is crickets.

Hopefully the AI creators have some idea of what a monster their creation could be if used the wrong way and have programmed in appropriate safeguards.

Like gun laws in the U.S. for example. :smuggrin:
 
Hopefully the AI creators have some idea of what a monster their creation could be if used the wrong way and have programmed in appropriate safeguards.

Like gun laws in the U.S. for example. :smuggrin:
Indeed. I believe that is exactly the way the US Government has framed a Constitution with Separation of Powers between various Institutions and a strong, well respected legal system that ensure individuals don't abuse their office to recreate a Tyranny. :rolleyes:

Just like the Gun Laws yeah ?
 


Write your reply...
Top