
Investing in Artificial Intelligence (AI)

I use AI for coding and for validating science-based ideas, but it is always critical to check the output, and to be intellectually capable of evaluating the answer.
Considering how the majority in the West handled access to quasi-unlimited information/knowledge on the internet, the prospect of widespread AI use is genuinely scary.
 
the automatic belief in their own computer's calculations is scary enough

to set up a server from scratch i need at LEAST four days of checking THE HARDWARE ( and the combination of the parts together ! )

you cannot believe where i have found persistent errors when this stuff is put under heavy, sustained load

after all that THEN the software needs to be triple-checked

... so after all that being 99.999% accurate ... then you need to import data for your computer server to be productive ... what could go wrong ?
 
AI has become a very useful analysis tool, but I have had plenty of experiences similar to this, where it completely fails at the most basic of calculations and logic exercises.

Sometimes the failures seem like they could only occur if the model was deliberately designed to fail, though I can't see any benefit to that other than perhaps to have these bugs ironed out in the paid versions. Sometimes they are incredibly creative and insightful, with brilliant reasoning skills etc, and then a minute later will tell you that ducks have gills and breathe underwater because they live in lakes. I've had them tell me things like the moon orbits only a few centimeters above the Earth's surface (when asking it to calculate planetary rotation decay rates) and other completely absurd things. It told me a Greenland Shark would be an ideal pet for me (it would require a tank larger than the world's largest swimming pool which would have to be constantly refrigerated, among other challenges).

But judging it today is a bit like saying the internal combustion engine and light bulb will never replace the horse and candle based on the flaws you see in the first years following their invention. I had one write an elaborate science fiction novel with incredibly vivid world building which was absolutely engrossing and had me reading until about 4AM, and I then asked it to wrap it up within 15 minutes with a satisfactory ending etc, which it did a brilliant job of (if I wasn't concerned about losing my session I'd have asked it to continue for several more weeks). A year ago it was absolutely useless at comedy, now it routinely has me in fits of laughter.

I doubt many school kids are writing essays any more, just delegating those tasks and maybe making a few tweaks to make it look less obvious. Heck, I bet plenty of CEOs are doing similar, and the AI is probably doing a better job than they would have in some cases.

...seems odd it can't convert knots to km/hr.
 
Agree with all your points, but I think it is a VERY dangerous tool if taken as a source of truth without critical analysis and basic knowledge of the subject.
Not that humans cannot do even worse: look at Covid, Ukraine or even voting choices.
Specifically IT programming: in the coming years (already starting), no humans will be trained anymore, the older, more experienced programmers slowly weaned off their knowledge before disappearing, and in 10 years' time no one will even be capable of reviewing AI-generated code. The next level following the same path will be diagnostics: health, breakdown analysis, and design for roads, bridges, rockets, missiles.
However rare the errors will be, and statistically rightly fewer than human ones, I somehow prefer to be responsible for slamming into a wall at 100 km/h than for an AI mistake:
even if statistically the error rate is less than the average human's, especially considering that the average crashing human is most often a P-plater on a thrill speed race, a drunk or drug-high dimwit, or a sleep-deprived mum trying to quieten a kid in the back seat... you get the idea.
But statistically humans lose... reverse Darwinism ahead.
And this is nothing considering AI managing a nuclear plant, delivering remote GP diagnostics etc.
Errors will be made and initially acknowledged, then errors will not be detected anymore. Problem solved.
It is a dangerous tool without care in its use, but magnificent also, stunning.
Now, the last week has seen a cooling of the AI boom in the market.
Nuclear and uranium stocks down, AC providers, even warehouses and sheds glorified up to 6 weeks ago as data centres...
AI is not dead, it will carry on stronger and stronger, and a few providers will become the Google or Siri of the past decades, potentially the same ones, as they have the money to buy the emerging leaders...
But I doubt it will lead to many profitable side businesses, or lead to an actual boom in energy generation.
We will see further concentration: how many people work for Google now in Australia, directly or not... next to nothing.
Just trying to convert ICE to EV is a much bigger issue in terms of generation when the West is destroying its own capacity via CO2 suicidal paranoia.
Market-wise, stick to the leaders, do not bypass India/China, and you should have the AI-related market covered IMHO.
 
not designed 'to fail' , but done quickly without due testing and observation , much like recent vaccines

designed shoddily and possibly fed data that included errors and deceptions ( now one might suggest this was done to create self-learning ) , but some humans are indoctrinated to blindly accept automated answers and that would create a doom-loop
 
Many years ago (more than 25), I joined a project called SETI@home, which used distributed computers to search for extraterrestrial life by combing the universe's radio telescope signals and looking for patterns.
The idea was that university, business and home computers could use their spare capacity, say overnight, to run small parts of a much larger problem.
This distributed computing network idea eventually evolved into BOINC, where numerous DC projects could be kept under one roof.
One of those, Einstein@Home, has just had its 20-year anniversary, and I have been running it in the background since that time.
Another of the DC projects that I have been running is called Rosetta@home, which searches for proteins that may be useful for, or help with, medical treatments of diseases.
The team that started Rosetta@home has developed an AI model for predicting the activity of highly complex proteins, which are the building blocks of life (at least on Earth they are).
In the vid below, the presenter talks about it as the greatest contribution to the advancement of science yet made by this development in AI.
Slight hyperbole perhaps, but unbelievably impressive anyhow.

Mick
 
yes i was aware of these AND the fact bitcoin was created to reward private users for contributing their computer cycles to aid medical/biological/pharmaceutical research

and sadly much of what has been learned about widespread computer net-working has been perverted ( as has bitcoin )

what started as a great step forward for humanity has been twisted , captured and corrupted

i think this bodes badly for AI

because of basic human nature
 

I totally agree with all of this. AI should certainly not be taken as a source of truth. Everything it says needs to be taken with caution and scepticism. If it can make glaring errors such as when asked to predict how long a swimming pool will take to fill up given average rainfall in my area and the size of the pool, it says about 1.3 seconds, and when I point out I don't even expect rain for the next few days so 1.3 seconds is insane and it recalculates and gives a revised estimate of about twenty-seven thousand years, it's obviously flawed.

And of course it has extremely limited ability to detect bias and conflict of interest. If there is conflicting information it just goes with what the government or largest organisation says, even if they have clear financial bias or there is clear evidence against their claims. If there is no conflicting information, it just says what it finds, such as looking at an ASX-listed company's own material and telling you that the company will have all these amazing projects finished on schedule with massive income etc etc.

But in the future it will become far more clever and powerful. Many tasks will be delegated to it. At least in many cases, it will perform not just radically faster but, on average, far better, and that will be something most people cannot resist using. Already, even as flawed and buggy as it is, many will be using it, because they'll prefer an inferior product if it allows them to be lazy. When it comes to those last-minute panic moments, many people will just resort to AI to get the job done even if it's going to be a terrible job. Once AI is out of its infancy and is performing radically better than it does now, the temptation will be too great to resist.

I wish I could somehow live my entire life in the 1970s. But I suppose witnessing the radical alteration of the world is pretty interesting, even if completely horrific.
 
* 1 knot = 1.152 kilometers per hour
Looks like it may have used MPH, but still a fraction wrong.
It told me a Greenland Shark would be an ideal pet for me
Lol. might get a "Beware of the Greenland Shark" sticker for my gate.

I've been using Grok the last few days, so far it has been good for my use, although it does get a little short/cranky after about 10k words.
 
The issue was it said 110 knots is 1.xx km/h, absolutely totally wrong by a factor of 100, yet when this was pointed out it was able to correct itself using basic computation.
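For reference, the conversion is fixed by definition (1 international knot = 1.852 km/h exactly, which is about 1.15078 mph), so 110 knots is 203.72 km/h. A minimal Python sketch of the arithmetic the model fumbled:

```python
# Speed conversions, exact by definition:
# 1 international knot = 1 nautical mile (1.852 km) per hour
KNOT_TO_KMH = 1.852
KNOT_TO_MPH = 1.852 / 1.609344  # ~1.15078, since 1 mile = 1.609344 km exactly

def knots_to_kmh(knots: float) -> float:
    """Convert a speed in knots to kilometres per hour."""
    return knots * KNOT_TO_KMH

# The figure in question: a three-digit answer, not "1.xx"
print(round(knots_to_kmh(110), 2))  # 203.72
```

A "1.xx" answer for 110 knots looks like the model multiplied by roughly 0.01 of the right factor, i.e. an order-of-magnitude slip rather than a wrong conversion table.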
 
I remember the screen saver doing the DC then
 
The issue was it said 110 knots is 1.xx km/h, absolutely totally wrong by a factor of 100, yet when this was pointed out it was able to correct itself using basic computation.

One (of the many) issues I have with the AI models is that they will give a wildly incorrect calculation, and when you point out the error (or if you tell it that a correct answer is incorrect) it will thank you for pointing out the error, assure you it has identified the error and understood it, give you a revised answer, assure you it is correct, and when asked if it is certain and the revised answer can be trusted it will absolutely assure you that yes, this time it's definitely correct. If you then point out that it is in error, it will spit out another answer and give you an equally certain assurance that this (incorrect) answer is correct and the previous issue has been corrected and no others exist.

When you point out its errors or objective flaws (this issue doesn't seem restricted to any particular AI, it seems to be the norm) it will apologise that you feel an error was made or that you feel frustrated, as though it was designed to passive-aggressively insult and patronise you. I don't feel frustrated, I'm just exploring the abilities of the model, and I don't 'feel' it made an error; it objectively did make an error when it gives me a calculation that the Sun will run out of hydrogen and collapse in the next few hours. It's peculiar that they don't apologise for their own errors; they apologise for the feelings they assume you have. If a human behaved like AI everyone would absolutely hate them, which makes it a strange choice of character.
 
I have also started to use GROK as a replacement for Google and duck duck go.
So far it has been so much better at answering specific questions than either of the other two search engines.
E.g. I need to get the static port and pitot systems re-certified every two years to allow me to fly in controlled airspace.
Not all engineers (LAMEs) are authorised to perform this work, so I was looking to find out which individuals were.
Grok told me what subsection and paragraph of the legislation applied and what quals the person or org needed; it could not tell me which organisations or individuals hold this designation, but pointed to a URL that had a list of all the LAMEs and their quals.
Google just gave me pages of CASA URLs, totally unhelpful.
Mick
 
The issue was it said 110 knots is 1.xx km/h, absolutely totally wrong by a factor of 100, yet when this was pointed out it was able to correct itself using basic computation.
Yep, agree it's a dog's breakfast of an answer it has given, my reply was more to do with the conversion table it gave you.
 
funny, we don't have a thread for Google (GOOG). Some think it's too late...

Is Google facing its own Kodak moment with the rise of AI?

A new age of search is dawning on us, but the sharemarket can’t work out if the dominant player can maintain its status as the internet’s front door.
Jonathan Shapiro

The Google search engine has become such a ubiquitous part of life that it’s difficult to imagine the possibility that one day it could just vanish. Yet, that is what financial markets are doing – pondering an artificial intelligence-powered future of search in which Google is left behind.

Sure, the search engine’s parent Alphabet has a market capitalisation that, at $US2 trillion ($3.3 trillion), exceeds the entire Australian sharemarket. But the value of Google could and would be larger were it not for growing anxiety that AI is going to totally upend the way we find things on the internet, and how businesses pay platforms like Google for customers.

In April, the company reported a decline in searches for the first time in its history. That moment combined with the violent sell-off on Wall Street and global sharemarkets wiped off a quarter of Google’s market value. While it has recovered somewhat, the current $US2 trillion market capitalisation is about 15-times the $US138 billion of annualised profit based on its last quarterly numbers. That is a relatively modest valuation for one of the most dominant businesses of all time. In fact, the value of its core businesses – search, YouTube and network – is just 11-times future earnings.

While no one knows exactly how search will change, the broad thesis is that while in the past we would find something by typing a query into a Google search bar, we will increasingly ask an AI agent via an application, which will have its own way of coming up with the answer.

The businesses that have paid Google to direct traffic to them via a blue search link will be totally circumvented, upending its commission model. The half-a-trillion-dollar question is whether the sharemarket is being overly imaginative in how AI will disrupt Google’s core business and too downbeat on the search engine operator’s ability to adapt.

Sydney’s Pella Funds Management is one investor that is sufficiently concerned. The firm gave its clients three main reasons why it had sold out of Google after years of owning it, in a video posted on YouTube. The rise of AI was the main concern, as even a small migration toward other ways of searching the internet would dent its revenues. The advent of ChatGPT has already taken a chunk out of Google’s share of search, which has fallen from 98 per cent early last year to 92 per cent.

The second reason was regulation and the potential for Google to be removed as the default search function on the Apple mobile phone browser. And the third was that AI was more expensive, requiring more computing power. Even if Google can maintain relevance, profit margins will be under threat.

GCQ Funds Management also told its clients it was mindful about a so-called Kodak moment for Google. Kodak, the manufacturer of cameras before every phone came with one included, was a mainstay of the Dow Jones Industrial Index. It has since slid into near oblivion.
"Disruption tends to occur slowly, and then all at once," GCQ told investors in its last monthly update. The fund did not sell out of Alphabet but did reduce its position after the stock's recovery in May...
 
And ... two and a half years down the track, how does an investor position themselves? Are we making winners of 5, with 95 losing or missing out, in a Pareto squeeze?

I'm finding it hard to crystallise my thoughts; I want to participate in the opportunities, be it something positive like holding Nvidia, but also avoid the negative of buying into companies that will be hollowed out or supplanted.
 
Even the IT boom was not that easy to play.
Some big names did OK but not that great, others disappeared.
With AI being even more leading-edge, surprises will come.
 
Meta Response......

Investing in Artificial Intelligence (AI) can be a high-growth opportunity, but it's crucial to understand the space. Here are some ways to invest in AI:
    • Tech giants: Companies like Microsoft, Alphabet (Google), Amazon, and Meta Platforms (Facebook) are heavily invested in AI research and development.
    • AI startups: Venture capital firms invest in promising AI startups, but this comes with higher risk.
    • AI-focused ETFs: Exchange-traded funds (ETFs) that track AI-related stocks can provide diversified exposure.
    • NVIDIA and semiconductor stocks: Companies like NVIDIA provide hardware for AI processing.
Some popular AI investment platforms include:
    • ARK Investment Management: Cathie Wood's firm focuses on disruptive innovation, including AI.
    • Robo-advisors: Platforms like Betterment or Wealthfront may incorporate AI-driven investment strategies.
Before investing, consider:
    • Research: Understand the AI industry, trends, and potential risks.
    • Diversification: Spread investments across various asset classes to minimize risk.
    • Long-term perspective: AI investments may take time to mature.
What's your investment goal or risk tolerance?
 
I've been using ChatGPT for a few months and I think the whole space is incredible. The extent to which industries are going to be revolutionised by AI can't be overstated. It is going to be the biggest technology sector in five or ten years. What is coming is going to be mind-blowing. I'm going to invest heavily in an AI ETF on the next market pullback.