Australian (ASX) Stock Market Forum

September 2025 DDD

View attachment 209460


So @rederob would be quite correct. I do 'plagiarise' from many sites, sources, persons (dead and alive) to cobble together a daily post.

Due to working a day job and trading, I don't always have time for the correct attribution to the various persons responsible when posting.

So this will be the last post on this series of threads.

jog on

duc
C'mon @ducati916

Get off your ass and Italian masterpiece motorbike and give us a few more September DDD before you start October DDD.

btw. I don't love you for your mind, just your body, handling and engine, but don't tell anyone else.





gg
 
Stuff that I find interesting from all over the internet. I will steal from everyone, alive or dead.

This is really just a market diary of stuff that is happening day-to-day in the markets. A pulse. I don't necessarily agree or disagree. I don't highlight my opinions, but (if you follow this) you will come to recognise my voice in the posts.




Friday, September 26, 2025

Moscow’s restrictions on fuel exports, including a full export ban on gasoline and a partial one on diesel, have lifted ICE Brent futures above $70 per barrel this week, further buoyed by market participants distrusting OPEC+’s unwinding of its 2.2 million b/d voluntary cuts, seeing only a fraction of promised barrels in the market. Iraq will be the key price indicator of the upcoming weeks, with the potential restart of oil flows via Turkey adding to Europe’s regional supply.

Iraq Doubles Down on Kurdish Restart. Having signalled the restart of Kurdish crude deliveries via the Kirkuk-Ceyhan pipeline earlier this week, Iraq’s federal government reached agreements in principle with 8 oil companies producing 90% of Kurdistan’s output, specifying that first oil will flow on Saturday.

Russia Extends Gasoline Export Ban. The Russian government extended its ban on gasoline exports until the end of 2025, seeking to keep all barrels at home following a barrage of Ukrainian drone strikes on refineries, whilst also introducing a ban on diesel exports for non-producers, i.e. trading middlemen.

Traders Continue Their Global Expansion. Having previously bought Italy’s Saras refinery and Sweden’s Preem, global trading house Vitol has expanded further into upstream, having finalized its purchase of a 30% stake in Cote d’Ivoire’s multi-billion-barrel Baleine field from Italy’s ENI (BIT:ENI) for $1.65 billion.

Trump Cancels $13 Billion in Renewable Funds. The US Department of Energy plans to cancel $13 billion in funds dedicated by the Biden administration to subsidize wind and solar parks as well as electric vehicles, coming a day after President Trump dismissed climate change as the ‘greatest con job’.

Oil Majors Scale Back 2026 Buybacks. French major TotalEnergies (NYSE:TTE) might have pioneered a new trend of downscaling its share buyback programme next year, with its board voting to restrict quarterly buybacks to $0.75-1.50 billion, down from previous pledges of $2 billion per quarter.

US Drillers Remain Sceptical. The latest Dallas Fed survey on oil and gas activity in key oil-producing states of the US has brought forward the ‘twilight of shale’ as executives voiced their frustration with Trump’s tariff policies, with the company outlook index falling from -6.4 in Q2 to -17.6 in Q3.

Canadian Gas Producers Curb Output. Record-low natural gas prices in Canada, with Alberta’s benchmark AECO spot prices trading at an average of -$0.18 per MMbtu this week, have prompted the country’s oil and gas companies to aggressively cut output, absent any incremental LNG offtake outlet.

Serbia’s Only Refinery Awaits Its Ordeal. The Trump administration will impose sanctions on Serbia’s Russian-owned oil company NIS from October 1, according to the country’s President Aleksandar Vucic, citing Gazprom Neft’s 44.9% stake in the company that operates the 100,000 b/d Pancevo refinery.

French Unions Look Forward to Week-Long Strikes. France’s largest industrial trade union CGT vowed to disrupt port operations and block ships from discharging until its salary demands are met, prompting terminal operator Elengy to invoke force majeure at three sites, temporarily halting cargo discharges.

BP Scales Back Its Renewable Ambition. Publishing its annual Energy Outlook, UK oil major BP (NYSE:BP) pared back its call of global crude demand peaking in 2025 at 102 million b/d, pushing peak oil out to 2030 (at 103.4 million b/d), blaming the change on weaker-than-assumed energy efficiency gains.

World’s Most Important Copper Mine Grinds to a Halt. US mining giant Freeport-McMoRan (NYSE:FCX) declared force majeure at its Grasberg mine in Indonesia, one of the largest copper and gold mines globally, after a landslide blocked access to underground parts of it, trapping seven miners and killing two.

Russia to Build Four Nuclear Plants in Iran. Tehran has signed a $25 billion agreement with Russia’s state nuclear corporation Rosatom to build four nuclear power reactors in the country, with Iranian media suggesting the 5 GW capacity site would be located in the southeastern province of Hormozgan.

Europe Plans Tariffs on Chinese Steel. According to German newspaper Handelsblatt, the European Commission plans to impose tariffs of 25% to 50% on Chinese steel and related products over the upcoming weeks, even though Europe accounts for only 4% of China’s steel exports, as of end-2024.




Small-caps are stepping into the spotlight after sitting on the sidelines for a long stretch. These stocks have been waiting for their moment, and it looks like that moment is finally here.

The Russell 2000 ETF just closed at new all-time highs for the first time in 969 days — ending its longest drought ever.


1758234075010_russell_01K5FENQVBX469CNGMJD6SQV9V.png




Now it’s their turn to go on offense and prove they can sustain this strength. If small-caps can keep leading from here, it would be a huge confirmation of risk appetite and a fresh tailwind for the broader market into the year-end.

A lot of these names are already participating — Industrials, Tech, Gold miners.

These are offensive groups pushing higher.

And really, it’s tough to picture this bull market wrapping up before small-caps even register a clean breakout.

That’s been a key reason we’ve stayed constructive and eager to keep putting money to work in equities.

Our Chief Market Strategist Steve Strazza walked through this yesterday in a live session, highlighting actionable setups.

One standout: the 2-to-100 Hunt.



8883852597_2-100%20mini_01K62TBDESK0Q2MKN5TJG23J9Z.png







  • The S&P 500 ($SPX) fell for the third straight day, but the pullback has been mild, slipping just -2% on an intraday basis and -1.3% on a closing basis.

  • Frank notes that short-term momentum swiftly corrected from extremely overbought levels on Monday to nearly oversold levels. Importantly, it didn't quite reach oversold today, indicating a resilient underlying bid.

  • Before pulling back, RSI confirmed the last all-time high on Monday, rather than diverging like it typically does before significant tops. However, immediate follow-through is necessary, and a potential oversold reading would suggest a deeper pullback than the previous ones.
The Takeaway: A -2% decline was all it took for short-term momentum to correct from extremely overbought levels to nearly oversold levels. It avoided oversold territory today, signaling resilience, but immediate follow-through is necessary, and a potential oversold reading would be a red flag.






The historic period of American exceptionalism in the stock market is over.
That ended last year, and the rest of the world has been playing catch-up throughout 2025.
While the United States stock market is on pace for another epic year, it’s actually one of the worst-performing countries in the world.
That’s not so much a knock on America as it is an impressive run from foreign equities.

Foreign Investors Didn’t Get the Memo

We have the data. We see the changes in dynamics between U.S. and foreign markets.
But the investors themselves still haven’t pivoted.
Here’s a great chart from our friends over at Ned Davis Research. Foreign investors now have a record allocation to U.S. equities:
image-115-1024x756.png

This chart shows the value of foreign-held U.S. stocks as a percentage of foreign-held U.S. financial assets.
It’s never been higher – now above 30%. Notice how this was just 12% 15 years ago.
Foreign investors had little to no confidence in U.S. stocks at the moment when U.S. stocks went on the most important run of global outperformance in stock market history.

Now that this run is over (in our opinion), foreign investors have never had this much faith in U.S. stocks.
Perfect.
I’m happy to take the other side of that.

China Bull

When it became obvious last fall that Donald Trump was going to win by a landslide, the question was simple: What’s the trade?
Based on consensus at the time, everyone assumed Trump 2.0 would be bad for China.

Even the Chinese didn’t want Chinese stocks.
They still don’t.
No one wants Chinese stocks.
I do.

They’re working really well. And my bet is they’ll keep working on an absolute basis and also outperform the U.S.
Just when the rest of the world finally realized how dominant the U.S. stock market has been, we’re flipping that around and letting them have it.
I’ll happily take the foreign equities, thank you.
Stay sharp,


jog on
duc
 
Screenshot 2025-09-29 at 1.56.13 PM.png
Screenshot 2025-09-29 at 1.56.28 PM.png
Screenshot 2025-09-29 at 1.56.39 PM.png
Screenshot 2025-09-29 at 1.57.01 PM.png
Screenshot 2025-09-29 at 1.57.16 PM.png



Full: https://www.construction-physics.com/p/whats-happening-to-wholesale-electricity


  • As investors, we either adapt or we fail.
  • We've changed some definitions.
  • We play based on reality.
On September 22, 2025, the definitions changed.

Small-caps, mid-caps, large-caps — the way we've classified companies for decades no longer fits the reality of today's market. The world has moved on, but the numbers everyone keeps using are stuck in the past.

I'm here to update the rules. You don't have to like it. You don't even have to agree with me. But this is where we are now. And if you want to keep up, you'll need to adjust.

So let's start at the beginning. What even is a "small-cap" stock? How small is too small? How big is too big?

According to ChatGPT, a small-cap stock is worth between $300 million and $2 billion. Gemini says the same thing.

Fidelity also says $300 million to $2 billion.

Meanwhile, Wikipedia says it's between $250 million and $2 billion. So does Investopedia.

But, respectfully, they're all wrong.

"We've Always Done It This Way"


Near the turn of the century, when I first started in this business, the market-cap ranges were set.

Small-caps are companies worth between roughly $300 million and $2 billion. Micro-caps are anything smaller than $300 million.

On the other side of the spectrum, large-caps are any stock worth more than $10 billion, while mid-caps are the "Jan Bradys" of the world – anything between $2 billion and $10 billion.

That's it. Those are the levels.

And, to many, these numbers still apply.

"Because we've always done it this way" is the worst excuse a human can use, especially for market professionals.

Take Nvidia (NVDA). At $4.2 trillion, it's worth more than the entire Russell 2000 small-cap index combined. Trillion-dollar companies didn't even exist until 2018, when Apple (AAPL) became the first to cross the mark.

Since then, Microsoft (MSFT), Alphabet (GOOG), Amazon.com (AMZN), Meta Platforms (META), Tesla (TSLA), Broadcom (AVGO), and even Berkshire Hathaway (BRK.A) have joined the club.

The world has changed. Market-cap inflation is real. And if you're still pretending a $10 billion company belongs in the "large-cap" bucket, you're playing a game that no longer exists.

As investors, we have to adjust. Sticking with outdated definitions isn't just lazy. It's dangerous.

The New Rules


Earlier this week, we decided to officially change the rules. Now, small-caps are between $1 billion and $10 billion.

If you're not at a billion, then I can't even take you seriously. Below $1 billion and your stock is a micro-cap.

In the old days (and according to many sources this is still the case), any stock above $10 billion was a "large-cap." That's laughable. By our math, $10 billion still makes you a small-cap.

Officially, a large-cap stock from now on is worth between $30 billion and $200 billion.

And mid-caps are the companies larger than small-caps ($10 billion) but smaller than large-caps ($30 billion). There are currently about 350 companies in the United States that fit this new mid-cap criteria.

Finally, there are 50 companies in America right now worth more than $200 billion. Those are now officially the mega-cap stocks.
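If you want to apply these new buckets mechanically, here's a minimal sketch of the thresholds described above (the function and its name are mine, purely illustrative):

```python
def classify_market_cap(cap: float) -> str:
    """Bucket a company by market cap (USD) using the updated thresholds:
    micro < $1B, small $1B-$10B, mid $10B-$30B, large $30B-$200B, mega > $200B.
    Per the article, $10B "still makes you a small-cap", so that boundary
    stays in the small-cap bucket.
    """
    B = 1e9
    if cap < 1 * B:
        return "micro-cap"
    if cap <= 10 * B:
        return "small-cap"
    if cap < 30 * B:
        return "mid-cap"
    if cap <= 200 * B:
        return "large-cap"
    return "mega-cap"


print(classify_market_cap(10e9))    # small-cap
print(classify_market_cap(4.2e12))  # mega-cap (e.g. NVDA at $4.2 trillion)
```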

At the end of the day, markets evolve, and so should the way we define them. Pretending 20-year-old thresholds still make sense in a world where single companies are worth trillions is lazy at best, irresponsible at worst.

That's why I've redrawn the lines.

dbe144689642bca16161ab5392405d-new-marketcap-rules.png

Call it market-cap inflation. Call it common sense. Either way, these are the new rules.

Stop clinging to outdated definitions. If you want to navigate today's markets, you need to frame the playing field as it actually exists — not as it did in the 1980s.

Someone had to set the record straight. I just did.

Now the only question is, are you going to keep pretending, or are you going to play by the rules of reality?



jog on
duc
 
Summary

Indices: Russell 2000 +0.97% | Dow +0.65% | S&P 500 +0.59% | Nasdaq 100 +0.44%


Sectors: All of the 11 sectors closed higher. Utilities led, gaining +1.63%. Consumer Staples lagged, but still finished positive +0.24%.

Commodities: Crude Oil was flat at $65.19 per barrel. Gold rose +0.25% to $3,790 per oz.

Currencies: The US Dollar Index fell -0.27% to $98.18.

Crypto: Bitcoin is currently up +0.42% to $109,484. Ethereum is up +3.88% to $4,026.

Volatility: The Volatility Index dropped -8.61% to 15.29.

Interest Rates: The US 10-year Treasury was unchanged at 4.174%.



  • Despite today’s bounce, Bitcoin is on pace for its worst week since March, down -5% with two days left in the week. It's still up +1% for September with three days before the monthly close.

  • Alfonso notes that Bitcoin is testing a critical area of support between $107k and $110k. Price peaked at this level after the election and again in January before breaking out, and the level turned into support this summer.

  • The AVWAP from the April low further reinforces the significance of this area. Price has formed a potential Head & Shoulders Top in recent months. However, breaking above the right shoulder at $118k would invalidate the top and open the door for fresh highs.
The Takeaway: Bitcoin is on track for its worst week since March as it tests a critical area of support around $107k to $110k. The outcome of this test will be very revealing, as Bitcoin has been an excellent leading indicator for stocks and risk appetite.



One of the best ways to read the market is by watching where money is moving.

Capital flows tell you who’s leaning in, who’s sitting back, and how much risk investors are willing to take.

A favorite tool I use for this is the Consumer Discretionary vs. Consumer Staples ratio.

Historically, when this ratio points higher, it signals an environment where market participants are rewarded for buying stocks, not shorting them.

And right now, it is breaking out to its highest level ever.

xly%20xlp%20and%20spy.png
So whenever you doubt, look at this chart.

New highs like these are exactly the kind of evidence that suggests we should be focusing on stocks to buy, not stocks to sell.

It’s been working pretty well so far. And as long as it holds, the path of least resistance remains higher — giving the bull market plenty of fuel into the year-end.

Of course, a more "cautious" scenario exists if the breakout fails and the ratio drifts back into its old range.

But for now, everything points to a market environment that rewards those leaning into risk.
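For anyone who wants to track this ratio themselves, it's nothing exotic: divide the Consumer Discretionary ETF's closing price by the Consumer Staples ETF's, date by date. A minimal sketch with made-up prices (tickers XLY and XLP are just the usual proxies, and the figures are illustrative only):

```python
import pandas as pd

def discretionary_vs_staples(xly_close: pd.Series, xlp_close: pd.Series) -> pd.Series:
    """Consumer Discretionary / Consumer Staples price ratio.

    A rising ratio means Discretionary is outperforming Staples (risk-on);
    a falling ratio means defensive leadership (risk-off).
    """
    return (xly_close / xlp_close).dropna()

# Made-up closing prices, just to show the mechanics:
dates = pd.date_range("2025-09-22", periods=5, freq="B")
xly = pd.Series([230.0, 231.5, 233.0, 232.0, 235.0], index=dates)
xlp = pd.Series([80.0, 79.8, 79.5, 79.9, 79.4], index=dates)
print(discretionary_vs_staples(xly, xlp).round(3))
```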





screenshot-2025-09-27-at-6-01-33-pm-png.png
screenshot-2025-09-27-at-6-01-46-pm-png.png
screenshot-2025-09-27-at-6-02-04-pm-png.png
screenshot-2025-09-27-at-6-02-26-pm-png.png
screenshot-2025-09-27-at-6-02-39-pm-png.png
screenshot-2025-09-27-at-6-03-12-pm-png.png
screenshot-2025-09-27-at-6-03-32-pm-png.png

screenshot-2025-09-27-at-6-06-36-pm-png.png




Are we in an AI bubble?

That's the question on a lot of investors' minds right now. Nothing seems to matter more given how much the generative-AI boom has helped drive the U.S. stock market in recent years.

Since OpenAI's ChatGPT launch in November 2022, the S&P 500 Index has returned around 70%. And Nvidia (NVDA), Microsoft (MSFT), Meta Platforms (META), and other mega-cap tech stocks have added more than $15 trillion in market value.

Sam Altman, founder of OpenAI, sounded a somewhat tepid warning in August, saying that people might be getting “overexcited” about AI.

More recently, Mark Zuckerberg, founder and CEO of Meta Platforms, admitted that it's "quite possible" that we're in an AI-induced bubble, or at least headed for one.

Zuckerberg drew parallels between the current AI boom and other large infrastructure build-outs in the past that led to bubbles, such as the 19th century railroad mania and the late 1990s dot-com bubble. And he alluded to the hundreds of billions of dollars that Meta Platforms and other hyperscalers are throwing at data centers and other AI-related investments, with little regard for return on investment.

Indeed, there are striking similarities between the current environment and that of the late 1990s.

Judging from equity valuations, few investors are truly worried right now. But the big worry should be that we’re headed for a repeat of the dot-com bust.

The bad news is that there probably will be a bust after such unrestrained investment.

The good news, if you could call it that, is that the comparisons between the peak of "irrational exuberance" in the late 1990s and the current AI boom are misguided.

I think many people have forgotten just how bonkers the dot-com era was.

In the lead up to the bubble peak, people lost their minds...

It was a proper mania complete with unbelievably reckless trading, out of control price action, and beyond absurd valuations.

While things may seem crazy now, in many ways there’s no comparison to the situation in late 1999 and early 2000.

Here are five reasons why the dot-com bubble was bigger and badder than the current AI boom, and why we’re not in a mania phase… at least not yet.

No. 1: Dot-Com Era IPO 'Pops' Remain Unmatched

Nothing screams "FOMO" (fear of missing out) like a massive IPO return on the first day of trading...

In March of this year, Newsmax (NMAX) went public in a $75 million IPO. The offer price was $10 per share. Inexplicably, buyers were so aggressive that the stock closed its first day of trading above $83 – up more than 730%. The IPO set a record for its first-day "pop."

There have been other spectacular IPOs this year. For example, Figma (FIG) and Circle Internet (CRCL) posted first-day gains of 250% and 168%, respectively.

The dot-com bubble featured its own massive IPO pops. For example, on December 9, 1999, shares of computer server and workstation company VA Linux Systems were up nearly 700% on their first day of trading.

However, the dot-com IPO market was far more ludicrous than the one today.

In 1999, there were more than 470 U.S. common stock IPOs with an offer price of at least $5. The average and median first-day return from the IPO offer price to the close on the first day of trading were about 71% and 57%, respectively.

The momentum continued into 2000, when 380 companies went public. Shares popped an average of 56%.

Picture1-5.png
There have only been 67 IPOs this year. And their average and median first-day gains were 35% and 14%, respectively.

While the new-issues market is certainly getting hot, the IPO fever isn't nearly as intense as it was during the dot-com bubble.

No. 2: The Dot-Com Bubble Had Crazier Stock Price Moves

In 1999, telecom company Qualcomm (QCOM) decided to shed businesses to focus on advanced wireless technology and chip design.

The market loved the strategy so much that Qualcomm's stock rocketed 2,600% higher that year, rising from a market cap of about $4 billion to more than $113 billion.

Moves like that helped power the Nasdaq 100 Index, which was the epicenter of the dot-com bubble. The Nasdaq 100 includes the 100 largest nonfinancial companies listed on the Nasdaq stock exchange. And it contained many of the high-flying tech and telecom stocks, like Qualcomm, that went parabolic during the late 1990s bull market.

From early 1995 to March 2000, the S&P 500 returned more than 260%. But the Nasdaq 100 soared by an amazing 1,080%.

Since the lows in 2020, the Nasdaq 100 is up about 240%. That's solid, but it's nothing compared with the late 1990s bull run.

The following chart shows the Nasdaq 100 since 1985.

Picture2.png
It's a log chart, so equal distances on the y-axis have the same percentage changes. It also shows the long-term trend line and standard deviation channels.

The dot-com mania is clearly visible as the big outlier. All the other moves look tame in comparison.

No. 3: Index Valuations, When Measured Properly, Are Still Below Record Levels

You may have seen the scary headlines warning of sky-high valuations, like this one from the Wall Street Journal last month:

U.S. Stocks Are Now Pricier Than They Were in the Dot-Com Era
Here's another one from Yahoo! Finance:

This Warren Buffett indicator is bright red. Why it could be worse than the 1999 bubble – and how to prepare.
Then there was a research note from Torsten Slok, the chief economist at private-equity and alternative asset management firm Apollo Global Management:

AI Bubble Today Is Bigger Than the IT Bubble in the 1990s
It went on to assert that the top 10 companies in the S&P 500 today are more overvalued than they were back then.

I could go on and on.

It can be confusing for investors reading these warnings because there's no universally agreed-upon way to value the broader stock market. And every valuation metric has flaws.

Take the price-to-sales (P/S) ratio. Put simply, it compares the market value of a stock or index with annual sales (revenues).

Currently, P/S shows the market to be far more expensive than it was in 2000. Yet, the P/S ratio is one of the worst metrics for valuing the S&P 500 because it's blind to margins and profitability.

Other metrics, such as market cap to gross domestic product (or gross national product) and Tobin's Q, have their own fatal flaws. The cyclically adjusted price-to-earnings ("CAPE") ratio, one of the most popular valuation metrics, has faults, too. The CAPE ratio is the real (inflation-adjusted) index value divided by the 10-year average of real earnings for the index. It smooths out the variation in earnings over the business cycle.
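For readers who prefer the mechanics to the prose, the CAPE calculation is exactly what that sentence says: today's real index level divided by the trailing ten-year average of real earnings. A rough sketch, assuming monthly price, earnings, and CPI series (this is my illustration, not Shiller's actual code):

```python
import pandas as pd

def cape_ratio(price: pd.Series, earnings: pd.Series, cpi: pd.Series) -> pd.Series:
    """Cyclically adjusted P/E: real index level / 10-year average of real earnings.

    All three inputs are monthly series sharing a date index. Prices and
    earnings are deflated to the latest month's dollars with the CPI, then
    earnings are averaged over a trailing 120-month window.
    """
    latest_cpi = cpi.iloc[-1]
    real_price = price * latest_cpi / cpi
    real_earnings = earnings * latest_cpi / cpi
    avg_real_earnings_10y = real_earnings.rolling(window=120).mean()
    return real_price / avg_real_earnings_10y
```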

The CAPE ratio peaked at more than 44 in 1999. It's currently around 40, suggesting this is the second-highest valuation the market has ever attained.

Picture3.png
Faith in the CAPE ratio has waned because there has been a structural shift in the indicator only seen with the benefit of hindsight. The CAPE ratio's average up to 1990 was 14.6. Ever since, the average has been 27. And it has been above 27 since 2016.

Valuation metrics can be useful tools. But if the tool you followed has made you bearish since 2016, then how useful was that tool?

It's nice to smooth out earnings volatility. But averaging 10 years of trailing earnings is like staring in the rearview mirror with binoculars.

The S&P 500 is now dominated by Big Tech companies. They're highly profitable and growing quickly.

Nvidia, Microsoft, Apple (AAPL), Alphabet, Amazon (AMZN), Meta Platforms, Broadcom (AVGO), and Oracle (ORCL) now account for about 35% of the S&P 500's market cap.

Aggregate earnings for these companies have grown from about $90 billion a decade ago to around $560 billion today – a more than sixfold increase.

Picture4.png
Not only that, but these Big Tech companies are also expected to earn $670 billion over the next four quarters. Perhaps these estimates are a bit too high. But the growth has been undeniable and is one reason why the CAPE may be overstating current valuations.

Arguably, the trailing price-to-earnings (P/E) ratio, based on the last four quarters of earnings, and the forward P/E, based on consensus analyst estimates, are better.

These metrics have their flaws as well. But we can still use them to make a reasonable comparison between periods.

Here's the forward P/E for the S&P 500 since 1995:

Picture5.png
It's currently at 23, which is high but not quite as high as the dot-com-bubble peak of around 26.

No. 4: The Largest Companies in the S&P 500 Are Still Cheaper

Microsoft led the bull market in the late 1990s...

Its stock peaked in December 1999, three months before the broader market. But the company remained the largest in the S&P 500, with a roughly $560 billion market cap when the index topped on March 24, 2000.

On that day, Microsoft's trailing P/E and forward P/E were both above 60. But that was nothing compared with the second-largest company, Cisco Systems (CSCO), which had a forward P/E of 137. Oracle was another top-10 stock in the S&P 500 with a triple-digit P/E of around 118.

The table below shows the full list of the top 10 stocks on March 24, 2000.

Picture7.png
The average forward P/E was 53, although that was dragged up by the triple-digit outliers. The median forward P/E was about 35.

I've also included P/E ratios that treat these stocks as a 10-member market-cap-weighted index, called S&P 10. (Note, this is not the same as a market-cap-weighted average of the P/E ratios.)

The S&P 10's forward P/E ratio was nearly 39 back then. And I think that's the most relevant number.
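That parenthetical note is worth spelling out. Treating the ten stocks as one index means dividing their combined market cap by their combined earnings, which is not the same number as a cap-weighted average of the individual P/Es. A toy example with made-up figures shows the gap:

```python
def index_pe(market_caps, earnings):
    """P/E of a cap-weighted index: total market cap / total earnings."""
    return sum(market_caps) / sum(earnings)

def cap_weighted_avg_pe(market_caps, earnings):
    """Cap-weighted average of the individual P/E ratios (a different number)."""
    total_cap = sum(market_caps)
    return sum((cap / total_cap) * (cap / earn)
               for cap, earn in zip(market_caps, earnings))

# Two hypothetical stocks: market caps and total earnings in $ billions.
caps, earns = [500, 100], [10, 5]
print(index_pe(caps, earns))             # (500 + 100) / (10 + 5) = 40.0
print(cap_weighted_avg_pe(caps, earns))  # 0.833 * 50 + 0.167 * 20 = 45.0
```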

The table below shows the same ratios and statistics for the largest stocks in the S&P 500 as of September 22, the day of the highest index close as I write.

Picture8.png
There's only one set of triple-digit P/Es, and that's Tesla (TSLA). Microsoft is the only company that appears on both lists, and its forward P/E is about half of what it was in March 2000.

Every summary statistic in the AI boom table is below its corresponding value in the dot-com bubble table.

The chart below compares the S&P 10 P/Es.

Picture9.png
This clearly shows that the largest stocks in the S&P 500 are still cheaper now than they were at the height of the dot-com bubble. (This is also true using free-cash-flow yield, a valuation method that deserves its own write-up.)

The largest companies today might be expensive, but they're not record expensive. I know that's hardly comforting, but it's further evidence that we're not at mania levels.

I will also concede that the S&P 500 is more concentrated than ever. The top 10 stocks in the index accounted for around 25% of total index market cap at the peak in 2000. Today, they total roughly 40%.

No. 5: The 'P/E Above 100' Club in the Dot-Com Era Was Larger

The Nasdaq 100 was completely devastated as the dot-com bubble burst. From its peak in March 2000 to the end of 2002, it fell as much as 83%.

That's why headlines today warning about record valuations catch people's attention. It suggests that a similar outcome is possible, or even likely.

Yet, the Nasdaq 100 reached a truly absurd valuation in 2000. Its trailing P/E reached nearly 100.

Picture10.png
On the bright side, the largest stocks in the Nasdaq aren't nearly as expensive as they were during the dot-com bubble. The Nasdaq 100's trailing P/E is currently about 34. On the other hand, the index is also well above the 10-year average P/E of about 27.

A triple-digit P/E is a nosebleed valuation. Only companies with great businesses that are growing very quickly deserve a P/E that high. And even then, those stocks will probably be decimated from time to time as they grow into their valuations.

To look at stocks with a P/E above 100, I will broaden my universe out to the top 1,000 largest U.S. companies by market cap. (Note, this is not the Russell 1000 Index. It includes companies that aren't in any index.)

Among the largest 1,000 stocks, there are currently 29 with a forward P/E greater than 100. Their median P/E is 137.5.
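The screen itself is simple if you have forward P/E estimates for your universe: keep everything above 100 and take the median. A minimal sketch with a toy universe (tickers and values are invented):

```python
import statistics

def nosebleed_club(forward_pe: dict, threshold: float = 100.0):
    """Return the tickers with forward P/E above the threshold, plus their median P/E."""
    club = {t: pe for t, pe in forward_pe.items() if pe > threshold}
    return club, (statistics.median(club.values()) if club else None)

# Invented universe, purely illustrative:
universe = {"AAA": 35.0, "BBB": 140.0, "CCC": 95.0, "DDD": 210.0, "EEE": 120.0}
members, median_pe = nosebleed_club(universe)
print(sorted(members), median_pe)  # ['BBB', 'DDD', 'EEE'] 140.0
```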

I already mentioned Tesla. The "P/E above 100" club also includes highfliers like Palantir Technologies (PLTR), Cloudflare (NET), Snowflake (SNOW), Roku (ROKU), CrowdStrike (CRWD), Figma, and Circle Internet.

Not to be outdone, the dot-com bubble had its own list of crazy valuations, such as Cisco, which I mentioned earlier. Stocks with forward P/Es greater than 100 in early 2000 also included Broadcom, VeriSign (VRSN), eBay (EBAY), Qualcomm, Time Warner, and Oracle.

On March 24, 2000, there were 62 stocks with forward P/Es greater than 100. And their median P/E was nearly 247.

Picture11.png
So, there were more than twice as many egregiously valued large- and mid-cap stocks during the dot-com bubble as there are today. And their valuations were much more extreme back then.

The Verdict: We're a Far Cry from Peak 'Dot-Com' Territory

So, there you have it... The dot-com bubble featured a hotter IPO market and more shocking price moves. It was more broadly expensive than the current market. That's also true for the top 10 stocks. And tech stocks were far more expensive in early 2000 than they are now. Plus, the number of stocks with nosebleed valuations was far higher during the dot-com bubble.

Don't get me wrong... Things are getting crazy.

The stock market has gotten extremely expensive. Just because it's not at a record valuation doesn't mean we shouldn't be worried. At the very least, we should expect relatively low, long-term returns from here.

There are also signs of excessive speculation everywhere, including soaring options volumes, ballooning leveraged exchange-traded fund assets, meme stock (and coin) pumping, pre-revenue companies attaining multibillion-dollar valuations, and quantum-computing stocks going nuts.

By all means, be disciplined and follow your investing strategy. Make sure your portfolio is diversified. Hold some defensive stocks. Sell stocks that have unrealistic expectations built into their valuations. Hold some cash.

But whatever you do, don't claim that this market is crazier than the peak of the dot-com bubble.



jog on
duc
 
screenshot-2025-09-28-at-7-27-23-am-png.png
screenshot-2025-09-28-at-7-27-38-am-png.png
screenshot-2025-09-28-at-7-41-08-am-png.png
screenshot-2025-09-28-at-7-49-18-am-png.png
screenshot-2025-09-28-at-7-55-20-am-png.png
screenshot-2025-09-28-at-8-01-08-am-png.png
screenshot-2025-09-28-at-9-18-29-am-png.png



Essentially evidencing that 'momentum' is slowing in stocks, bonds and BTC, but steady in gold.


  • Monday:
    • One of the largest integrated freight & logistics stocks, FedEx $FDX, beat its headline expectations and rallied +2.7% as a result. This snapped a streak of 4 consecutive negative earnings reactions.
    • Following a mixed report, the $33B residential construction giant, Lennar $LEN, suffered its 8th consecutive negative earnings reaction. No stock in the S&P 500 has a longer beatdown streak than this one.
  • Tuesday:
    • There were no S&P 500 earnings reactions on Monday, so we couldn't help but tell you about one of the hottest growth stocks in the market, American Battery Technology $ABAT.
    • Over the past year, ABAT's sales have increased more than 11x, and the price is on the cusp of climbing out of a massive bearish-to-bullish reversal pattern. With the technicals confirming the fundamentals, we think this name is going much higher.
  • Wednesday:
    • The only earnings reaction on Tuesday came from the $69B specialty retailer, AutoZone $AZO, which had a slightly negative post-earnings reaction after the company posted worse-than-expected headline numbers.
    • As we noted in the last Weekly Beat, we were expecting another negative earnings reaction because of the bearish fundamental trends that have developed over the past few quarters. This latest quarterly report reiterated those negative trends.
  • Thursday:
    • On Wednesday, the provider of corporate uniforms, Cintas $CTAS, beat its headline expectations, but had a slightly negative earnings reaction. Despite the short-term weakness, this is one of the best long-term stocks in the market, highlighted by 42 consecutive years of dividend increases.
    • We also heard from the world's fifth-largest semiconductor company, Micron Technology $MU, which was punished for smashing its top and bottom line expectations. As we mentioned in the last Weekly Beat, we have very low conviction in this stock's technical breakout because the fundamentals do not confirm it.
  • Friday:
    • On Thursday, the $144B consulting giant, Accenture $ACN, beat its headline expectations and suffered a -2.7% post-earnings reaction. The market's negative reaction was less about the past quarter and more about the forward guidance, which was significantly weaker than anticipated.
    • Last, but not least, the $7B auto & truck dealership company, CarMax $KMX, missed expectations across the board and crashed lower as a result. This company is a complete disaster, and we believe it should be kicked out of the S&P 500.
What's happening next week
Calendar%20(09.28.2025)_01K685X8RFEP1B7Z6RV55H6EJJ.png
Next week, we'll be focusing on the world's largest footwear & accessories company, Nike $NKE. The stock is coming off one of its best earnings reactions ever, and we believe the bulls are likely to show up again after this quarter's earnings report.

Beyond NKE, we’ll also be watching:
  • The cruise line giant, Royal Caribbean $RCL.
  • The Wall Street firm, Jefferies Financial $JEF.
  • And the human resources software behemoth, Paychex $PAYX.
In addition, we'll hear from the egg producer, Cal-Maine Foods $CALM, the frozen foods producer, Conagra Brands $CAG, and the precious metals explorer, Novagold Resources $NG.

This is one of the final weeks before the end of the current earnings season, so it'll be on the slower side. However, there will be plenty to cover at the Daily Beat.

Now, let’s dive into the top setups heading into next week.
Here's the setup in NKE ahead of Tuesday's earnings report
63763217_image%20(2837)_01K685XS7KVA68M5R83TEAV1S1.png
Nike is expected to post $10.99B in revenue and EPS of $0.27 after Tuesday's closing bell.

Heading into the report, the price is retesting a key level of interest. This is the same place that marked the top back in 2015, and it was flipped into support in 2018 and 2020.

Now, we're back at the scene of the crime.

The bears tried to break this key level during the Tariff Tantrum earlier this year, but failed miserably. Last quarter was the best earnings reaction ever as the company posted better-than-expected results and forward guidance.

We believe the squeeze is on in NKE as long as the price holds above 68.
Here are the past 3 years of earnings results & reactions for NKE
Snapshot%20(09.28.2025)_01K685XWNGHW28073KNDZCC3NT.png
Nike is coming off one of its longest beatdown streaks ever. From late 2023 to early this year, the stock had six consecutive negative earnings reactions.

This historic bearish streak came to a decisive end last quarter, and the odds favor a new bullish technical and fundamental regime.

In addition, the key level of interest we mentioned previously adds to our conviction. We highly doubt one of the greatest American brands ever will complete a massive top amidst a raging equity bull market.

We expect another significant upside move in NKE following Tuesday's earnings report.
Here's the setup in CCL ahead of Monday's earnings report
63761777_image%20(2838)_01K685XQTKM80NZA0QYDHB3MGR.png
Carnival is expected to report $8.10B in revenue and EPS of $1.32 after Monday's closing bell.

Ahead of the call, the price is flirting with the resolution of a massive multi-year bearish-to-bullish reversal pattern. A positive reaction this week could be the catalyst to decisively mark the beginning of a brand-new primary uptrend, which could last for years.

Its biggest competitor, Royal Caribbean $RCL, has already broken out to new all-time highs. We believe there's a good chance CCL will catch up soon.

We're looking for CCL to gap-n-go above 31 following its earnings report. If that happens, the path of least resistance will be higher for the foreseeable future.
Here are the past 3 years of earnings results & reactions for CCL
Snapshot%20(09.28.2025)_01K685XTNACEEPHWVH0RB161WC.png
As you can see, the EPS growth for Carnival has been horrendous over the past three years. However, last quarter, this changed in a significant way, and the market responded with one of the strongest earnings reactions in years.

The market is expecting similar earnings growth this quarter, and if the company delivers, we anticipate the market will respond positively.

Also, notice the post-earnings drift. Before the paradigm shift last quarter, the stock had experienced negative post-earnings drift in seven consecutive quarters.

The tide is turning for this company, and we expect another strong earnings report before the opening bell on Monday.

If CCL gaps above 31, we expect there to be a big chase higher for the foreseeable future, creating a textbook gap-n-go.




We start with the best sectors, then drill into the subgroups. We pick one, and then take a look at the top stocks in it.

This week’s standout is Energy, which just jumped to the second spot in our sector rankings.
202025-09-27T074114.824_01K67STAS4KP2F2JSKXN731SKT.png
Energy was the best-performing sector this week, with $XLE reaching its highest level since early April.

Beneath the surface, many individual stocks are breaking out of short-term consolidation patterns. This suggests we could see some follow-through leadership from energy in the coming weeks.

Here’s a look at our overall industry rankings, which show Energy Equipment stocks reaching the sixth spot.
202025-09-27T074120.553_01K67ST8SYKAKR0FYAVWWB0DJM.png
This group supplies the machinery, tools, and services that energy producers rely on to explore for and extract oil and gas.

The relative strength from equipment names has been off the charts of late. This is a risk-on signal for the broader energy trade.

Below are the Top 10 names in the Energy Equipment subsector, ranked by relative strength.
1060636_download%20(65)_01K67ST4VEP5XBGA306N8796MJ.png
This week’s spotlight stock is Solaris Energy Infrastructure $SEI:
1759051059221_sei_01K67ST3F76Q1Y9HH9R7C9229T.png
The stock carries a massive 30% short interest with a 5x days-to-cover ratio as price presses against the upper bounds of a one-year base.
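For context on those two figures: short interest is shares sold short as a percentage of the float, and days-to-cover is shares short divided by average daily volume, i.e. how many typical trading days it would take the shorts to buy back. A quick sketch with illustrative numbers (not SEI's actual share counts):

```python
def short_interest_pct(shares_short: float, float_shares: float) -> float:
    """Short interest as a percentage of the tradable float."""
    return 100.0 * shares_short / float_shares

def days_to_cover(shares_short: float, avg_daily_volume: float) -> float:
    """How many average trading days it would take shorts to buy back their positions."""
    return shares_short / avg_daily_volume

# Illustrative only: 15M shares short, 50M share float, 3M shares traded per day.
print(short_interest_pct(15e6, 50e6))  # 30.0 (% of float)
print(days_to_cover(15e6, 3e6))        # 5.0 (days to cover)
```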



Just when you think things couldn’t get any more nuts... On the day before this market started to slip – after hitting yet another high – a tweet with this video started making the rounds. If nothing else, it showed just how far the sports-style gamification of securities trading has gone. Here’s a screen shot...
at%202.23.05%E2%80%AFPM_01K66GSYZA9PJX5GZ703F6CNWT.png
Did that mark the tippy top? Is this the beginning of the end of this edition of the ease of the squeeze? To repeat what I always say: I have no idea. But as I wrote last week in “The Trillion Dollar Question,” the velocity of what we’re seeing in some of these names is simply not normal. Neither are scenes like the ones above, which happen to be from something called Perp-DEX DAY in South Korea, which its promoters tout as “reimagining perpetual futures trading with the energy and excitement of an esports championship.” YEEEEhaw!! At least that’s the way it was explained in various Instagram posts, like this one...
at%206.37.43%E2%80%AFAM_01K66GWEJTXR0V23WXXGV8P7EY.png
So far, this mostly involves crypto. But as the Wall Street Journal explained a few days later in an article on the topic...
Known as perps, the contracts give traders access to extreme leverage and have exploded in popularity during a rally that has sent bitcoin prices up more than 70% over the past year. Though popular in other parts of the world, perps were largely unavailable until recently to U.S. traders on regulated venues.

Their emergence is a sign that financial markets, which have steadily grown riskier since 2020, will likely keep growing more speculative.
That this is in South Korea should be no surprise. “Korean Capers,” as I’ve called it, has been evolving as a tail-wagging-dog element of this market for quite some time. This is the latest list of the most active U.S. stocks traded in South Korea...
at%203.07.23%E2%80%AFPM_01K66R660NBRR6NQEZZJ3FV6Y8.png
Among the top 25, note the number of super-duper leveraged ETFs, plus the others – all of them on one speculative list or another. No wonder some of these names are trading more on hype – and maybe not even hope – than fundamentals. As I’ve pointed out previously, and it’s worth mentioning again: perhaps nobody has put this in perspective better than Owen Lamont, a portfolio manager at Acadian Asset Management, who in January 2024 wrote...
Unfortunately, in recent years the U.S. has been following Korea on the road to crazytown. What Americans call ‘meme stocks,’ Koreans call ‘theme stocks.’ These absurd objects have been a wacky feature of the Korean stock market since at least 2007.
In March of this year he went on to say...
Every market has an iconic group of retail investors who, like George Costanza, exemplify bad decisions leading to wealth destruction. In 1929, it was ‘the little fellow’ buying high leverage investment trusts traded on the NYSE. In 1989, it was Japanese salarymen speculating on land and stocks. In 1999, it was mutual fund investors buying growth funds. In 2021, it was Robinhood investors buying meme stocks. Today, perhaps that group is Korean retail investors.
Not perhaps, I’d argue – it is!

Speaking of speculation, there’s no question we’re in a bubble...
Good luck figuring out if and when it pops. Until then, here’s something to keep in mind from my pal John Hussman of Hussman Econometrics, who a month ago wrote...
The bubble hasn’t burst yet because investors haven’t quite yet recognized that the highest valuations in history imply the lowest expected future returns in history. A market crash is nothing but risk-aversion meeting a market that’s not priced to tolerate risk. Every fresh record high in valuations amplifies the likely downside when that occurs, but examining the collapse of past bubbles, the “catalyst” typically becomes evident only after the market is in free fall.
This month he added...
Division by zero is known as a 'singularity.' It’s the point where equations break down, values become 'indeterminate,' things stop working normally, and variables shoot toward infinity and suddenly collapse on the other side.

The current speculative bubble was driven by a singularity. Avoiding the crash on the other side relies on the willingness of investors to accept the lowest long-term return prospects in U.S. history, forever.

Extremely high prices may seem like a beautiful thing, but they’re a corrupt bargain. Unless you actually sell, the cost of “enjoying” record high valuations is that you are locking-in record low future rates of return.
Interpret at will...



jog on
duc
 
Just returning to this story from last week.


screenshot-2025-09-29-at-7-36-30-am-png.png


Consider:


screenshot-2025-09-29-at-7-39-53-am-png.png

Full: https://www.reuters.com/markets/cur...ajor-argentina-currency-swap-line-2025-04-08/


From a while back:

screenshot-2025-09-29-at-7-41-16-am-png.png


Full: https://www.weforum.org/stories/202...a and Chile are,million and Chile 9.6 million.


  • Around 60% of identified lithium is found in Latin America, with Bolivia, Argentina and Chile making up the ‘lithium triangle’.
  • Demand for lithium is predicted to grow 40-fold in the next two decades due to the energy transition to renewable power and electric vehicles.
  • However, there are concerns about the sustainability of water-intensive lithium mining in Latin America and elsewhere.

screenshot-2025-09-29-at-7-43-32-am-png.png


And as an added bonus:

screenshot-2025-09-29-at-7-50-47-am-png.png


400,000 barrels of oil.


So essentially, if the US had not stepped in, China would have, thereby gaining a stranglehold over another critical mineral, plus the oil.


Just in case that was not sufficient reason:

What would happen to the markets if Argentina defaulted? Which banks are on the hook? Would they fail? This scenario is unlikely because I assume China would step in for the 'loan-to-own' commodity benefits.


It's definitely a thing:


screenshot-2025-09-29-at-7-53-01-am-png.png
screenshot-2025-09-29-at-7-53-15-am-png.png

Full: https://www.cfr.org/backgrounder/ch...rgentina-brazil-venezuela-security-energy-bri


Loan-to-Own is how China is waging economic war on the US

screenshot-2025-09-29-at-7-55-03-am-png.png

Full: https://www.bloomberg.com/news/arti...up-to-2-billion-yuan-bonds-to-grow-china-ties

What's over there?

screenshot-2025-09-29-at-7-56-55-am-png.png

Amongst other stuff, Uranium.

Full: https://oec.world/en/profile/countr...t&selector1879id=usd&selector358id=tradeValue

Nuclear power is definitely on the comeback trail, so Uranium supplies are a good thing to control.


screenshot-2025-09-29-at-7-59-54-am-png.png


Full: https://world-nuclear.org/informati...ng-of-uranium/world-uranium-mining-production


The point being, Krugman and his ilk are so politically partisan and myopic that reading what they have to say is seriously misleading, worthless blab.


All up to date.



jog on
duc
 
We have mostly green arrows around the world to start the week.

Europe is up small and Asia is broadly higher. The SPX held the 21-day last Thursday at 6569 and reclaimed the 8-day Friday to show the active bears have no power.

The U.S. economy remains resilient, supporting valuations. Plus there is AI optimism and a dovish Fed. Listen up today for talk about the potential of a government shutdown. Trim and trail into strength so you can move your feet and net money.

Tesla (TSLA) is my #1 focus today. We are adding a bullish position to Power Plays Options this morning for a very simple reason: I'm expecting record highs. The chart is forming a nice bull flag setup, and if it can get and stay above the $445 area, it's blue skies ahead.

NVDA is digesting well. It didn't have enough power to break out to highs last week. Some are long vs. $173ish. If it can get and stay above $180.26ish, it might wake up again.

DASH still acts well. It’s acting in line with the tape and holding near the 21-day. November calls are on the radar for PPO.

JPM and the banks give clues on market complexion. This hit an all-time high of $317+ on Friday to show the market is still strong. The 8-day remains support.

I bought UMAC calls on last week’s dip because Washington may make new drone deals. $11.80 is now active support.


Chart of the Day for September 29, 2025

Today’s Featured Stock​

Valued at $4.61 billion, Mercury General (MCY) is engaged primarily in writing all risk classifications of automobile insurance. The company offers automobile policyholders the following types of coverage: bodily injury liability, underinsured and uninsured motorist, property damage liability, comprehensive, and collision.

What I’m Watching​

I found today’s Chart of the Day by using Barchart’s powerful screening functions to sort for stocks with the highest technical buy signals; superior current momentum in both strength and direction; and a Trend Seeker “buy” signal. I then used Barchart’s Flipcharts feature to review the charts for consistent price appreciation. MCY checks those boxes. Since the Trend Seeker signaled a "Buy" on Aug. 4, the stock has gained 16.07%.
1673x649.png

Barchart Technical Indicators for Mercury General​

Editor’s Note: The technical indicators below are updated live during the session every 20 minutes and can therefore change each day as the market fluctuates. The indicator numbers shown below therefore may not match what you see live on the Barchart.com website when you read this report. These technical indicators form the Barchart Opinion on a particular stock.
Mercury hit an all-time high of $83.34 in morning trading on Sept. 29.

  • MCY has a Weighted Alpha of +51.08.
  • Mercury has a 100% “Buy” opinion from Barchart.
  • The stock gained 28.97% over the past year.
  • MCY has its Trend Seeker “Buy” signal intact.
  • Mercury is trading above its 20-, 50-, and 100-day moving averages.
  • The stock made 8 new highs and gained 5.46% in the last month.
  • Relative Strength Index (RSI) is at 73.53.
  • There’s a technical support level around $82.37.

Don’t Forget the Fundamentals​

  • $4.61 billion market capitalization.
  • 14.35x trailing price-earnings ratio.
  • 1.52% dividend yield.
  • Revenue is projected to grow 8.92% this year and another 5.66% next year.
  • Earnings are estimated to decrease by 37.41% this year but increase again by 64.44% next year.

Analyst and Investor Sentiment on Mercury General​

I don’t buy stocks because everyone else is buying, but I do realize that if major firms and investors are dumping stock, it’s hard to make money swimming against the tide.

It looks like Wall Street analysts are high on MCY and so are individual investors.

  • The Wall Street analysts tracked by Barchart have issued 2 “Strong Buys” and 1 “Hold” opinion on the stock.
  • The average price target tracked by Barchart is $100.
  • Value Line gives the stock its “Highest” rating but with a price target of $95.
  • CFRA’s MarketScope Advisor rates it a “Buy.”
  • Morningstar thinks the stock is fairly valued.
  • 268 investors following the stock on Motley Fool think the stock will beat the market while 64 think it won’t.
  • 7,250 investors monitor the stock on Seeking Alpha, which rates the stock a “Strong Buy” and comments: “MCY’s strong growth and profitability, combined with positive momentum, justify its Strong Buy rating, despite some concerns over valuation relative to sector peers.”

The Bottom Line on Mercury General​


MCY appears to have backing not only from Wall Street, but also from other financial advisory sites and individual investors.



Following last Thursday's closing, the $406B discount store giant, Costco Wholesale $COST, beat its headline expectations and suffered a -3.23 reaction score on Friday.
The company posted revenues of $86.16B, versus the expected $86.01B, and earnings per share of $5.87, versus the expected $5.80.
Now let's dive into the fundamentals and technicals
Was this earnings report the catalyst for a new downtrend in COST?
93483717_image%20(2836)_01K662X29HTM5T648MS4Z9S1XP.png
Costco Wholesale had a -2.9% post-earnings reaction, and here's what happened:
  • Net sales increased 8% year-over-year, and this growth was reflected in the bottom line, with a 10.9% rise in net income over the same period.
  • Membership income, a key performance indicator for the company, surged 14% year-over-year. This came with a worldwide renewal rate of nearly 90%, much higher than the industry standard.
  • On the guidance front, the management team announced it plans to build 35 warehouses and dramatically increase its investment in future growth. While this is likely the right long-term move, the market is concerned about this in the short term due to the impact it will have on near-term cash flows.

As we highlighted in a recent Weekly Beat, this company has consistently grown its top and bottom lines, exceeding the market's headline expectations.

However, the earnings reactions have been anything but consistent. The past seven post-earnings reactions have been the following: drop, rally, drop, rally, drop, rally, drop. This reinforces the range-bound price action we've seen recently.

With the price now putting the finishing touches on a prolonged distribution pattern, we believe the bears are likely to take control of the primary trend.

The surge in capital expenditure is the fundamental catalyst for this new technical downtrend.
So long as COST holds below 920, the path of least resistance is likely to remain lower for the foreseeable future.




Key Takeaways​

  • Shorter-term US Treasury yields have fallen, while yields on longer-dated bonds could remain elevated, thanks to the threat of higher inflation and investor concerns surrounding the federal deficit.
  • Known as a steepening yield curve, this trend began ahead of the Fed’s first interest rate cut this year, and is expected to continue as rates are trimmed further.
  • A steeper curve means more opportunities to capture yield in longer-dated bonds, but it also comes with risks.
The landscape for bond investors is changing. Now that the Federal Reserve is cutting interest rates, strategists are expecting short-term Treasury bond yields to fall while yields on bonds with longer maturities stay high. In Wall Street lingo, that means the yield curve (a snapshot of the US Treasury market) is steepening.

The spread between the 10-year and two-year Treasury yields was 0.50 percentage points as of Friday, compared with 0.37 percentage points six months ago. It’s a big change from earlier this year, when yields across the bond market were relatively even. It’s a bigger change compared with the years following the pandemic, when short-term yields surpassed those on longer-term bonds.

For bond investors, a steeper curve provides opportunities to capture higher yields in longer-dated bonds. However, strategists say it’s important to understand the new risks that accompany that steeper curve. Cash and shorter-dated securities are subject to reinvestment risk—if yields fall, proceeds from those assets may not be able to be invested again at their original rate. On the other hand, longer-dated bonds could be subject to volatility, thanks to fluid fiscal and trade policy, an uncertain economic outlook, elevated inflation, and political pressure on the Fed.

Here’s what investors need to know.

What is the Yield Curve?​

The yield curve is a graphical representation of government-bond yields across different maturities, often from two-year Treasury notes to 10-year Treasury bonds. Bonds with longer maturities generally yield more than bonds with shorter maturities. That’s because investors expect extra compensation for the risks—like inflation and economic uncertainty—of locking their money up with the government for longer. The result is a curve that slopes upward.
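The "steepness" everyone quotes is usually just the 10-year yield minus the 2-year yield. A minimal sketch of that spread calculation, using the figures quoted earlier in this piece:

```python
def curve_spread(y10: float, y2: float) -> float:
    """2s10s spread in percentage points: 10-year yield minus 2-year yield."""
    return y10 - y2

def shape(spread: float) -> str:
    """Crude label: negative = inverted, positive = upward-sloping."""
    return "inverted" if spread < 0 else "upward-sloping"

# Figures quoted above: 0.37 points six months ago, 0.50 points as of Friday.
six_months_ago, friday = 0.37, 0.50
print(shape(friday), f"- steepened by {friday - six_months_ago:.2f} points")
```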

From Inverted to Steepening​

In rare circumstances, shorter-term yields can climb higher than longer-term ones, which results in an inverted yield curve. That was the case for roughly two years beginning in the fall of 2022, when the Fed began aggressively hiking its benchmark interest rate to combat runaway inflation in the wake of the pandemic.

That unusually long inversion fueled spirited debate about whether a recession was in the cards, but the slowdown never materialized. Carol Schleif, chief market strategist at BMO Wealth Management, says the fundamentals never supported the conclusion that a recession was imminent. “The bond market actually has been very nuanced,” she says.


The curve un-inverted in the fall of 2024, when shorter-dated yields began dropping as investors anticipated the Fed’s first rate cut.

Market watchers saw the same phenomenon unfold this summer, as shorter-dated yields dropped ahead of the Fed’s September 2025 rate cut. “Everybody believed that the cut was coming,” says Dominic Pappalardo, chief multi-asset strategist for Morningstar Wealth.

Longer-term yields have edged down too, but not as much as shorter-term ones. On the longer end of the curve, investors have endured volatility amid an uncertain outlook for fiscal policy, inflation, and economic growth.

“You’re taught that bonds are supposed to be the stability in your portfolio, and yet yields have been all over the map, which means prices have been all over the map,” says BMO’s Schleif. When investors worry that inflation will be higher in the future, or are concerned that the government will struggle to pay its debt obligations, the bond market can get jittery and yields can climb.

“The common adage over history is that the Fed controls the yield curve two years and in, and the market controls the yield curve 10 years and longer,” Pappalardo explains. “That seems to be holding true over the last six months or so.”

More Steepening Ahead

Analysts say that all signs point to more steepening ahead. With more rate cuts from the Fed likely, Pappalardo expects short-term yields to fall further. At the same time, it’s possible that longer-term yields will remain rangebound at their current levels—or climb even higher.
For one, inflation close to 3% doesn’t give long-term yields much room to fall, according to Kathy Jones, chief fixed income strategist at Schwab. On top of that, “Fed easing when inflation is too high—and possibly edging higher—just accelerates expectations that inflation could get worse. That’s keeping yields up,” she explains.

The tone of monetary policy is important too. “It matters why the Fed is cutting,” says BMO’s Schleif. “If the Fed’s cutting because growth is falling off a cliff, then those long yields tend to drift down. But if the Fed is cutting, either because it’s being politically pressured or because growth is super strong, that long end has a tendency to migrate up.”

Simmering worries about the federal deficit are also a factor. “If we have to keep issuing long US Treasury debt, at some point, people will demand higher yields to take on that debt,” Pappalardo says. Reduced foreign demand for US bonds could also drive prices lower and push long rates higher. Add it all together and “the longer end is going to have a hard time coming down,” he says. “None of those things seem to be going away in the next six months.”

How to Invest When the Yield Curve Steepens

In general, strategists say falling short-term yields should be a signal to investors to reevaluate their holdings in cash and other short-dated assets. With more rate cuts on the horizon, those securities won’t be able to keep delivering the yield they did six months ago—a phenomenon called reinvestment risk.

“If you stay very short and rates go down, you’re reinvesting at lower and lower rates and your income stream shortens,” says Schwab’s Jones. With cash yielding less, investors can consider adding more duration to their portfolios. Duration is a measure of interest rate sensitivity, and longer-term bonds have greater durations than shorter-term ones.

Longer-duration bonds can offer more yield, but they also carry a different kind of risk, especially if rates keep rising and the value of those bonds falls. “If you need to cash in, [there’s a] good chance you’re going to have a price decline and a capital loss that will offset some of the income you’re earning,” Jones explains.
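
To put rough numbers on those two risks, here is a small illustrative sketch in Python. The yields are hypothetical and chosen purely to make the arithmetic visible; none of the figures come from the strategists quoted above.

def price(face, coupon_rate, yield_rate, years):
    # Price of an annual-pay bond: discounted coupons plus discounted principal.
    coupons = sum(face * coupon_rate / (1 + yield_rate) ** t for t in range(1, years + 1))
    return coupons + face / (1 + yield_rate) ** years

# Reinvestment risk: a 1-year bill bought at 4% has to be rolled at whatever
# rate prevails next year. If short rates fall to 3%, the second year's income
# drops from $4.00 to $3.00 per $100 of face value.
year1_income = 100 * 0.04
year2_income_if_rates_fall = 100 * 0.03

# Duration (price) risk: a 10-year 4% coupon bond repriced after a one
# percentage point rise in yields loses roughly its duration in percent.
p0 = price(100, 0.04, 0.04, 10)   # issued at par
p1 = price(100, 0.04, 0.05, 10)   # same bond after yields rise to 5%
print(f"Short end: income {year1_income:.2f} now, {year2_income_if_rates_fall:.2f} after a cut")
print(f"Long end: price {p0:.2f} -> {p1:.2f} ({(p1 / p0 - 1) * 100:.1f}%)")

The short-dated holder gives up income when rates fall; the long-dated holder gives up price when rates rise. That is the trade-off the strategists are describing.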

Morningstar’s Pappalardo cautions that it might not make sense for everyone to take more interest rate risk, but investors who want to maintain the current yield in their bond portfolio will need to extend maturity to do so.

For Jones, it’s all about balance: “In an environment like this, it doesn’t make sense to stay really short and it doesn’t make sense to be really long.” That’s why many bond pros have been advocating for a sweet spot in what’s known as the “belly” of the yield curve—bonds with maturities around five to seven years.

BMO’s Schleif says investors should be clear about why they’re holding bonds at all. Is it for stability and income, or to generate outsized returns? “For the most part, fixed income is the conservative ballast in the portfolio. If you’re going to be a risk taker, there are many places in the equity market and elsewhere to get that risk.” For investors who are also looking to capture a little extra return from bonds, Schleif says an opportunistic manager can help identify dislocations in the market.

Bond Fund Strategies for a Steepening Curve

Investors who hold individual bonds to maturity don’t have to worry too much about fluctuating yields. Over time, they can generally expect to earn the starting yield on a bond when that bond matures, unless they sell it early. Within bond funds, however, portfolio managers often take a much more active approach to managing risk and yield under the hood, especially when the curve is steepening.
There are “infinite ways” to arrive at a fund’s duration target via combinations of bonds with different maturities, says Morningstar’s Pappalardo. He explains that the yield curve today is a little less steep compared with its historical average, and the spread between two- and 10-year bonds is currently a little lower than what Morningstar analysts consider its fair value. When that’s the case, Pappalardo says a portfolio manager might be a little more comfortable taking on some extra interest rate risk by adding duration on the longer end and offsetting it with more holdings on the shorter end, which could see price appreciation if yields fall. Bond prices move inversely with yields.
Managers can also employ a strategy called “rolling down the curve,” Schwab’s Jones explains. They can sell bonds on the steeper part of the curve as their maturities shorten, capturing some extra price return because the bond’s yield will now be comparable to bonds of shorter maturities that are further in on the curve. “It’s a nice way to generate some added return to the income stream,” she says, which helps fund managers offer a mix of yield and price appreciation to investors.
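
A stylised version of that roll-down mechanic, again in Python. The curve below is made up (and deliberately steep), and the example assumes yields are unchanged over the holding period, which is the key assumption behind the strategy.

# Hypothetical yield curve: maturity in years -> yield
curve = {2: 0.0360, 5: 0.0385, 7: 0.0400, 10: 0.0415}

def price(face, coupon_rate, yield_rate, years):
    # Price of an annual-pay bond: discounted coupons plus discounted principal.
    coupons = sum(face * coupon_rate / (1 + yield_rate) ** t for t in range(1, years + 1))
    return coupons + face / (1 + yield_rate) ** years

# Buy a 7-year bond at par (its coupon equals the 7-year yield).
buy_yield = curve[7]
buy_price = price(100, buy_yield, buy_yield, 7)

# Two years later, if the curve is unchanged, it is now a 5-year bond and is
# repriced at the lower 5-year yield -- that repricing is the roll-down gain.
sell_price = price(100, buy_yield, curve[5], 5)
coupon_income = 2 * 100 * buy_yield

total_return = (sell_price - buy_price + coupon_income) / buy_price
print(f"Bought at {buy_price:.2f}, sold at {sell_price:.2f}, coupons {coupon_income:.2f}")
print(f"Two-year holding return: {total_return * 100:.2f}%")

If yields shift up across the whole curve instead of staying put, the duration loss can easily swamp that extra price return, which is why the roll-down is a return enhancer rather than a free lunch.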





A couple of weeks ago, it was announced that OpenAI is going to invest up to $300 billion in Oracle’s cloud computing.

This week, Nvidia committed $100 billion of investments into OpenAI.

Oracle is spending billions of dollars on Nvidia’s GPUs.

Nvidia invests in OpenAI who then invests in Oracle who then invests in Nvidia and Finkle is Einhorn and Einhorn is Finkle.

We’ve reached the mutually assured destruction phase of the AI bubble where the tech giants have decided they’re all in this together. If one is going to take the risk on massive capital expenditures then they’re all going to take the risk.

And yeah, I’m ready to call this a bubble based purely on the history of excess investments in innovation.

During the dot-com bubble of the 1990s, the telecom companies laid down more than 80 million miles of fiber-optic cables. Five years after the bubble burst, 85% of these fiber-optic cables still remained unused.

The Nasdaq crashed more than 80%.

The Railway Bubble of the 1800s also comes to mind. Here are some facts and figures I found while researching Don’t Fall For It:

  • There were 500 new railway companies by 1845
  • That same year, the Board of Trade was considering some 8,000 miles of new track in Great Britain alone, almost 20x the length of England.
  • The cost of the buildout was more than the national income of the entire country.
  • There were 14 bi-weekly newsletters about the railroad industry in circulation.
  • Charles Darwin got caught up in the bubble, losing 60% of his investment.
The good news is both of those bubbles were great for innovation.

By 1855, there were over 8,000 miles of railroad track in operation, giving Britain the highest density of railroad tracks in the world, measuring seven times the length of France or Germany. The telecom bubble helped power YouTube, social media, streaming movies, video calls, and everything else people dreamed about in the 1990s and more.

There are some similarities to the current AI buildout but many differences too.

The dot-com bubble was fueled by investor speculation in immature companies that didn’t generate any profits. Today’s tech firms are printing cash flow with insanely high margins.

Nearly all the money for the railways came from individuals. Retail investors were fueling the bubble.

The AI boom is coming from inside the house. It’s being led by the tech CEOs who are making these capital allocation decisions.

In the 1990s, Bill Gates said:

Gold rushes tend to encourage impetuous investments. A few will pay off, but when the frenzy is behind us, we will look back incredulously at the wreckage of failed ventures and wonder, ‘Who funded those companies? What was going on in their minds? Was that just a mania at work?’

Here’s what Mark Zuckerberg said in an interview recently:

If we end up misspending a couple of hundred billion dollars, I think that is going to be very unfortunate, obviously. But what I'd say is I actually think the risk is higher on the other side. If you build too slowly and then super intelligence is possible in three years, but you built it out assuming it would be there in five years, then you're just out of position on what I think is going to be the most important technology that enables the most new products and innovation and value creation in history.

In other words — we’re not going to undershoot on this. If it turns into a mania, so be it.

These tech leaders aren’t stupid. They know the history of over-investment. But they’re saying the risk comes from not spending enough.

So case closed? This is a bubble that’s sure to pop?

If this truly is a bubble of epic proportions it’s one of the weirdest ones we’ve ever seen.

According to The Wall Street Journal, there is now $7.7 trillion sitting in money market funds:

Screenshot-2025-09-23-141600.png
It’s a bull market in cash holdings.

Gold is up more than 40% this year alone and hitting new all-time highs at a healthy clip. Since ChatGPT was released in November 2022, gold is actually outperforming the Nasdaq 100:

GLD_QQQ_chart.png
How could a relic that’s been used for thousands of years outperform the biggest, baddest technology companies we’ve ever seen during an orgy of AI spending?

The other part that makes the current situation tricky to understand is the companies leading the charge in the AI bubble have the fundamentals to back it up. JP Morgan’s Michael Cembalest shared the following in a new research piece this week:

AI related stocks have accounted for 75% of S&P 500 returns, 80% of earnings growth and 90% of capital spending growth since ChatGPT launched in November 2022.

These companies are spending like drunken sailors but they can all afford the booze!

I understand why many investors are worried about the prospects of a bubble. When they burst it tends to be painful. If you’re invested in the market, you have plenty of exposure to the gigantic tech stocks:

mag7-market-cap-weight-since-2021-scaled.png


Just because this feels like some of history’s biggest bubbles doesn’t make it any easier to handicap.

The thing that worries me the most right now is everyone who has ever studied market history is now calling this a bubble. It seems so obvious. Markets are rarely that easy.

So what if you’re convinced we’re in a bubble? What actions should you take?

I’ll share some thoughts on this topic next week.

In the meantime, Michael and I dissected the AI bubble from all angles and much more on this week’s Animal Spirits:






Screenshot 2025-09-30 at 3.45.45 AM.png


Full:https://www.theatlantic.com/economy...opy-link&utm_medium=social&utm_campaign=share



Screenshot 2025-09-30 at 3.47.28 AM.png


Full:https://download.ssrn.com/nber/nber...413492da7df6db0ac6991641a7&abstractId=5515863



Now consider this:




China is making and installing factory robots at a far greater pace than any other country, with the United States a distant third, further strengthening China’s already dominant global role in manufacturing.

There were more than two million robots working in Chinese factories last year, according to a report released Thursday by the International Federation of Robotics, a nonprofit trade group for makers of industrial robots. Factories in China installed nearly 300,000 new robots last year, more than the rest of the world combined, the report found. American factories installed 34,000.

While Chinese factories have been using more robots, they have also gotten better at making them. The government has used public capital and policy directives to spur Chinese companies to become leaders in robotics and other advanced technologies like semiconductors and artificial intelligence.

Worldwide, robots and artificial intelligence are playing an increasingly prominent and disruptive role in manufacturing. Factory robots range from machines that weld car parts together to claws that lift boxes onto conveyor belts. As technology helps factories become more efficient, some are making do with fewer workers and altering the roles of others.

Over the past decade, China has embarked on a broad campaign to use more robots in its factories, become a major maker of robots and combine the industry with advances in artificial intelligence.
Screenshot 2025-09-30 at 3.53.38 AM.png


There have been all manner of reasons promulgated for China's enhanced use of robots:

(i) Poor demographics, which has some truth, but then pretty much every economy currently has poor demographics. The primary issue for the West, however, is that Chinese robot workers are MUCH CHEAPER, and hence far more productive, than their Western counterparts. China as a manufacturing base is eating the West's lunch.

(ii) China is a currency manipulator. True. But so is every other economy out there. But here (again) is the rub: while the US, aided and abetted by the UK, spent +/- $8 trillion on wars against 'Terror' etc., China spent money on productive infrastructure and on training a workforce (robots) that provides a return on investment. The US wars have created a liability in veterans' payments.

(iii) AI bubble. Lots of articles about whether there is an AI bubble. In China there is no AI bubble. What they have is an investment in AI combined with robotics (are they the same thing?) that is driving a Ferrari of manufacturing.


All in all, China's productivity is swamping the US. It also allows China to reset the monetary system, replacing USTs with gold. The USD is roughly 20x overvalued against the yuan, or 20x overvalued as against gold.


Meanwhile, US economists write:



Key Takeaways

  • Growth Helps, But Isn’t Enough: Even ambitious growth policies cannot fully stabilize federal debt without direct fiscal measures.
  • Regulatory Reforms Show Promise: Changes that boost growth without increasing deficits—like regulatory streamlining—are especially valuable.
  • Evidence Gaps Remain: More research is needed on the real-world effects of growth-enhancing policies, particularly their impact on debt sustainability.
  • A Broader Fiscal Strategy Is Essential: Policymakers should view growth-oriented reforms as one component of a comprehensive approach that includes tax and spending adjustments.
Conclusion

Growth-oriented policies can play a meaningful role in reducing federal deficits, but they are not a standalone solution. Direct fiscal adjustments remain unavoidable, and a broader evidence base is needed to guide effective policy design for both economic growth and fiscal sustainability.

Larry Swedroe is the author or co-author of 18 books on investing, including his latest Enrich Your Future. He is also a consultant to RIAs as an educator on investment strategies.

Full:https://larryswedroe.substack.com/p/can-economic-growth-alone-fix-the


jog on
duc
 
Now this is for the traders out there that love complicated:


Screenshot 2025-09-30 at 4.20.31 AM.png


Hello everyone,

It has been a while since I posted an indicator, so thought I would share this project I did for fun.

This indicator is an attempt to develop a pseudo Random Forest classification decision matrix model for Pinescript.
This is not a full, robust Random Forest model by any stretch of the imagination, but it is a good way to showcase how decision matrices can be applied to trading and within Pinescript.

So as not to market this as something it is not, I am simply calling it the "Simple Decision Matrix Classification Algorithm". However, I have stolen most of the aspects of this machine learning algo from concepts of Random Forest modelling.

How it works:

With models like Support Vector Machines (SVM), Random Forest (RF) and Gradient Boosted Machines (GBM), which are commonly used in Machine Learning Classification Tasks (MLCTs), this model operates similarly to the basic concepts shared amongst those modelling types. While it is not very similar to SVM, it is very similar to RF and GBM, in that it uses a "voting" system.

What do I mean by voting system?

How most classification MLAs work is by feeding an input dataset to an algorithm. The algorithm sorts this data, categorizes it, then introduces something called a confusion matrix (essentially sorting the data in no apparent order so as to prevent over-fitting and introduce "confusion" to the algorithm, ensuring that it is not just following a trend).
From there, the data is called upon based on current data inputs (so say we are using RSI and Z-Score, the current RSI and Z-Score is compared against other RSIs and Z-Scores that the model has saved). The model will process this information and each "tree" or "node" will vote. Then a cumulative overall vote is cast.

How does this MLA work?

This model accepts 2 independent variables. In order to keep things simple, this model was kept as a three node model. This means that there are 3 separate votes that go into getting the result. A vote is cast for each of the two independent variables and then a cumulative vote is cast for the overall verdict (the result of the model's prediction).

The model actually displays this system diagrammatically and it will likely be easier to understand if we look at the diagram to ground the example:

snapshot

In the diagram, at the very top we have the classification variable that we are trying to predict. In this case, we are trying to predict whether there will be a breakout/breakdown outside of the normal ATR range (this is a yes-or-no question, hence a classification task).

So the question forms the basis of the input. The model will track at which points the ATR range is exceeded to the upside or downside, as well as the other variables that we wish to use to predict these exceedances. The ATR range forms the basis of all the data flowing into the model.

Then, at the second level, you will see we are using Z-Score and RSI to predict these breaks. The circle will change colour according to "feature importance". Feature importance basically just means that the indicator has a strong impact on the outcome. The stronger the importance, the more green it will be, the weaker, the more red it will be.

We can see both RSI and Z-Score are green and thus we can say they are strong options for predicting a breakout/breakdown.

So then we move down to the actual voting mechanisms. You will see the 2 pink boxes. These are the first lines of voting. What is happening here is the model is identifying the instances that are most similar and whether the classification task we have assigned (remember our ATR exceedance classifier) was either true or false based on RSI and Z-Score.
These are our 2 nodes. They both cast an individual vote. You will see in this case, both cast a vote of 1. The options are either 1 or 0. A vote of 1 means "Yes" or "Breakout likely".

However, this is not the only voting the model does. The model does one final vote based on the 2 votes. This is shown in the purple box. We can see the final vote and result at the end with the orange circle. It is 1 which means a range exceedance is anticipated and the most likely outcome.
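
For anyone who wants the gist without reading Pine, here is a rough Python sketch of the neighbour-voting idea described above. The feature names, thresholds and toy data are mine, and it mirrors the concept rather than reproducing the script line for line.

import random

def matrix_vote(history, current, threshold):
    # One feature's vote: among past bars whose feature value sat within
    # +/- threshold of the current value, did the target fire more often than not?
    passes = fails = 0
    for feature_value, target in history:
        if abs(feature_value - current) <= threshold:
            if target == 1:
                passes += 1
            else:
                fails += 1
    return (1 if passes > fails else 0), passes, fails

def classify(x1_hist, x2_hist, x1_now, x2_now, x1_thr, x2_thr):
    # Two per-feature votes plus a cumulative vote, as in the diagram.
    random.shuffle(x1_hist)   # the "confusion" step: break any ordering in the data
    random.shuffle(x2_hist)
    v1, p1, f1 = matrix_vote(x1_hist, x1_now, x1_thr)
    v2, p2, f2 = matrix_vote(x2_hist, x2_now, x2_thr)
    final = 1 if (p1 + p2) > (f1 + f2) else 0   # cumulative verdict
    return v1, v2, final

# Toy history: (RSI, breakout?) and (z-score, breakout?) pairs -- purely illustrative.
rsi_hist = [(72, 1), (68, 1), (45, 0), (30, 0), (71, 1), (50, 0)]
z_hist = [(2.1, 1), (1.8, 1), (0.2, 0), (-0.5, 0), (1.9, 1), (0.1, 0)]
print(classify(rsi_hist, z_hist, x1_now=70, x2_now=2.0, x1_thr=5, x2_thr=0.5))

Each feature votes on its own look-alike cases, and the cumulative verdict is just the pooled pass/fail count, exactly as in the purple box of the diagram.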

The Data Table Component

The model has many moving parts. I have tried to represent the pivotal functions diagrammatically, but some other important aspects and background information must be obtained from the companion data table.

If we bring back our diagram from above:
snapshot

We can see the data table to the left.
The data table contains 2 sections, one for each independent variable. In this case, our independent variables are RSI and Z-Score.
The data table will provide you with specifics about the independent variables, as well as about the model accuracy and outcome.

If we take a look at the first row, it simply indicates which independent variable it is looking at. If we go down to the next row where it reads "Weighted Impact", we can see a corresponding percent. The "weighted impact" is the amount of representation each independent variable has within the voting scheme. So in this case, we can see it's pretty equal, 45% and 55%. This tells us that there is a slightly higher representation of Z-Score than RSI, but nothing to worry about.

If there were a major over-representation of greater than 30 or 40%, then the model would risk being skewed and voting too heavily in favour of 1 variable over the other.

If we move down from there we will see the next row reads "independent accuracy". Each independent variable's voting accuracy is considered separately. This is one way we can determine feature importance, by seeing how well one feature augments the accuracy. In this case, we can see that RSI has the greatest importance, with an accuracy of around 87% at predicting breakouts. That makes sense, as RSI is a momentum-based oscillator.

Then if we move down one more, we will see what each independent feature (node) has voted for. In this case, both RSI and Z-Score voted for 1 (Breakout in our case).
You can weigh these in collaboration, but it's always important to look at the final verdict of the model, which, if we move down, we can see in the "Model prediction" row, which is "Bullish".

If you are using the ATR breakout, the model cannot distinguish between "Bullish" and "Bearish", just that a "Breakout" is likely, either bearish or bullish. However, for the other classification tasks this model can do, the results are either Bullish or Bearish.

Using the Function:
Okay so now that all that technical stuff is out of the way, let's get into using the function. First of all this function innately provides you with 3 possible classification tasks. These include:

1. Predicting Red or Green Candle
2. Predicting Bullish / Bearish ATR
3. Predicting a Breakout from the ATR range

The possible independent variables include:
1. Stochastics,
2. MFI,
3. RSI,
4. Z-Score,
5. EMAs,
6. SMAs,
7. Volume

The model can only accept 2 independent variables, to operate within the computation time limits for pine execution.

Let's quickly go over what the numbers in the diagram mean:
snapshot

The numbers being pointed at with the yellow arrows represent the cases the model is sorting and voting on. These are the most similar cases and serve as the voting foundation for the model.

The numbers being pointed at in pink are the voting results.

Extrapolating the functions (for Pine developers):

So this is more of a feature application, so feel free to customize it to your liking and add additional inputs. But here are some key important considerations if you wish to apply this within your own code:

1. This is a BINARY classification task. The prediction must either be 0 or 1.
2. The function consists of 3 separate functions: the first 2 functions serve to build the confusion matrix, and the final "random_forest" function serves to perform the computations. You will need all 3 functions for implementation.
3. The model can only accept 2 independent variables.


I believe that covers the function. Hopefully this wasn't too confusing; it is very statsy, but it's a fun function for me! I use Random Forest excessively in R and always like to try to convert R things to Pinescript.

Hope you enjoy!


LOL.

Now for the money shot!




Pine Script®

// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
// /$$$$$$ /$$ /$$
// /$$__ $$ | $$ | $$
//| $$ \__//$$$$$$ /$$$$$$ /$$ /$$ /$$$$$$ /$$$$$$ /$$$$$$$ /$$$$$$ /$$$$$$ /$$ /$$ /$$$$$$ /$$$$$$$
//| $$$$$$|_ $$_/ /$$__ $$| $$ /$$//$$__ $$ /$$__ $$ /$$_____/|_ $$_/ /$$__ $$| $$ /$$//$$__ $$ /$$_____/
// \____ $$ | $$ | $$$$$$$$ \ $$/$$/| $$$$$$$$| $$ \__/| $$$$$$ | $$ | $$$$$$$$ \ $$/$$/| $$$$$$$$| $$$$$$
// /$$ \ $$ | $$ /$$| $$_____/ \ $$$/ | $$_____/| $$ \____ $$ | $$ /$$| $$_____/ \ $$$/ | $$_____/ \____ $$
//| $$$$$$/ | $$$$/| $$$$$$$ \ $/ | $$$$$$$| $$ /$$$$$$$/ | $$$$/| $$$$$$$ \ $/ | $$$$$$$ /$$$$$$$/
// \______/ \___/ \_______/ \_/ \_______/|__/ |_______/ \___/ \_______/ \_/ \_______/|_______/

//@version=6
indicator("Simple Decision Matrix Classification Algorithm [SS]", shorttitle = "DMCA [SS]", overlay=false)
g1 = "Model Parameters"
g2 = "EMA"
g3 = "RSI"
g4 = "MFI"
g5 = "Z-Score"
g6 = "ATR"
training = input.int(850, "Training Length", group = g1)
select_x1 = input.string("Stochastic", "Select Independent Variable X1", ["Stochastic", "RSI", "Z-Score", "MFI", "Volume", "EMA", "SMA"], group = g1)
select_x2 = input.string("Volume", "Select Independent Variable X2", ["Stochastic", "RSI", "Z-Score", "MFI", "Volume", "EMA", "SMA"], group = g1)
select_y = input.string("Red/Green Candle", "Select desired target Classification Prediction", ["Red/Green Candle", "Bullish / Bearish ATR", "ATR Range Breakout"], group = g1)
showtable = input.bool(true, "Show Table", group = g1)
// EMA
emalen = input.int(14, "EMA/SMA Length", group = g2), emasrc = input.source(close, "EMA/ SMA Source", group = g2)
// RSI
rsilen = input.int(14, "RSI Length", group = g3), rsisrc = input.source(close, "RSI Source", group = g3)
// MFI
mfilen = input.int(14, "MFI Length", group = g4), mfisrc = input.source(hlc3, "MFI Source", group = g4)
// Z-Score
zlen = input.int(14, "Z-Score Length", group = g5), zsrc = input.source(close, "Z-Score Source", group = g5)
// ATR
atrlen = input.int(14, "ATR Length", group = g6)

atr = ta.atr(atrlen)
sent = close > open ? 1 : 0
sentiment = high - open > open - low ? 1 : 0
rsi = ta.rsi(rsisrc, rsilen)
sto = ta.stoch(close, high, low, 14)
mfi = ta.mfi(mfisrc, mfilen)
z = (zsrc - ta.sma(zsrc, zlen)) / ta.stdev(zsrc, zlen)
ema = ta.ema(emasrc, emalen)
sma = ta.sma(emasrc, emalen)
atr_breakout = high - open > atr or open - low < atr ? 1 : 0

float x1_input = switch select_x1
    "Stochastic" => sto
    "RSI" => rsi
    "Z-Score" => z
    "MFI" => mfi
    "Volume" => volume
    "EMA" => close - ema
    "SMA" => close - sma
float x2_input = switch select_x2
    "Stochastic" => sto
    "RSI" => rsi
    "Z-Score" => z
    "MFI" => mfi
    "Volume" => volume
    "EMA" => close - ema
    "SMA" => close - sma
float y_input = switch select_y
    "Red/Green Candle" => sent
    "Bullish / Bearish ATR" => sentiment
    "ATR Range Breakout" => atr_breakout

float x1_threshold = 0
float x2_threshold = 0
if select_x1 == "Volume"
    x1_threshold := ta.stdev(volume, 14)
else if select_x1 == "Z-Score"
    x1_threshold := 0.05
else
    x1_threshold := 0.5

if select_x2 == "Volume"
    x2_threshold := ta.stdev(volume, 14)
else if select_x2 == "Z-Score"
    x2_threshold := 0.05
else
    x2_threshold := 0.5

// Function to shuffle a list of indices
shuffle_indices(n) =>
    indices = array.new_int(n)
    for i = 0 to n - 1
        array.set(indices, i, i)
    for i = 0 to n - 1
        rand_index = math.floor(math.random(0, n))
        temp = array.get(indices, i)
        array.set(indices, i, array.get(indices, rand_index))
        array.set(indices, rand_index, temp)
    indices

// Function to shuffle the rows of a matrix
shuffle_matrix(matrix, rows) =>
    shuffled_matrix = matrix.new<float>(rows, matrix.columns(matrix), 0)
    indices = shuffle_indices(rows)
    for i = 0 to rows - 1
        for j = 0 to matrix.columns(matrix) - 1
            matrix.set(shuffled_matrix, i, j, matrix.get(matrix, array.get(indices, i), j))
    shuffled_matrix

random_forest_classification(x1, x2, y, train) =>
    x1m = matrix.new<float>(train, 2, 0)
    x2m = matrix.new<float>(train, 2, 0)
    lx1 = x1[1]
    lx2 = x2[1]
    ly = y
    for i = 0 to train - 1
        matrix.set(x1m, i, 0, lx1)
        matrix.set(x1m, i, 1, ly)
        matrix.set(x2m, i, 0, lx2)
        matrix.set(x2m, i, 1, ly)
    x1m_random = shuffle_matrix(x1m, train)
    x2m_random = shuffle_matrix(x2m, train)
    int y1_pass = 0
    int y1_fail = 0
    for i = 0 to train - 2
        if matrix.get(x1m_random, i, 0) > 0
            if matrix.get(x1m_random, i, 0) >= x1 - x1_threshold and matrix.get(x1m_random, i, 0) <= x1 + x1_threshold
                if matrix.get(x1m_random, i, 1) == 1
                    y1_pass += 1
                else if matrix.get(x1m_random, i, 1) != 1
                    y1_fail += 1
    int y2_pass = 0
    int y2_fail = 0
    int x1_vote = y1_pass > y1_fail ? 1 : 0
    for i = 0 to train - 2
        if matrix.get(x2m_random, i, 0) >= x2 - x2_threshold and matrix.get(x2m_random, i, 0) <= x2 + x2_threshold
            if matrix.get(x2m_random, i, 1) == 1
                y2_pass += 1
            else if matrix.get(x2m_random, i, 1) != 1
                y2_fail += 1
    passes = y1_pass + y2_pass
    fails = y1_fail + y2_fail
    vote = passes > fails ? 1 : 0
    x2_vote = y2_pass > y2_fail ? 1 : 0
    // X1 independent accuracy
    int x1_ia_pass = 0
    int x1_ia_fail = 0
    int x2_ia_pass = 0
    int x2_ia_fail = 0
    x2_vote_pass = x2_vote == y
    x1_vote_pass = x1_vote == y
    for i = 0 to train
        if x1_vote_pass
            x1_ia_pass += 1
        else
            x1_ia_fail += 1
        if x2_vote_pass
            x2_ia_pass += 1
        else
            x2_ia_fail += 1
    x1_accuracy = x1_ia_pass / (x1_ia_pass + x1_ia_fail) * 100
    x2_accuracy = x2_ia_pass / (x2_ia_pass + x2_ia_fail) * 100
    // Accuracy
    int success = 0
    int failure = 0
    for i = 0 to train - 1
        if vote == y
            success += 1
        else if vote != y
            failure += 1
    accuracy = success / (success + failure) * 100

    [vote, accuracy, y1_pass, y1_fail, y2_pass, y2_fail, x1_vote, x2_vote, x1_accuracy, x2_accuracy]


[result, acc, y1p, y1f, y2p, y2f, x1v, x2v, x1a, x2a] = random_forest_classification(x1_input, x2_input, y_input, training)

// Demographics

total_sorted_cases_x1 = y1p + y1f
total_sorted_cases_x2 = y2p + y2f
x1_voted_result = y1p > y1f ? 1 : 0
x2_voted_result = y2p > y2f ? 1 : 0
total_cases = total_sorted_cases_x1 + total_sorted_cases_x2
x1_total_weight = (total_sorted_cases_x1 / total_cases) * 100
x2_total_weight = (total_sorted_cases_x2 / total_cases) * 100

string_Result = result == 1 ? "Bullish" : "Bearish"

if barstate.islast
    for lin in line.all
        line.delete(lin)
    for txt in label.all
        label.delete(txt)

// Feature Importance

feature_importance_color(current_variable, other_variable) =>
    color feature_color = na
    if current_variable > other_variable
        if current_variable <= 45
            feature_color := color.rgb(5, 46, 7)
        else if current_variable > 45 and current_variable <= 55
            feature_color := color.rgb(10, 99, 14)
        else if current_variable > 55
            feature_color := color.rgb(5, 235, 15)
    else if other_variable > current_variable
        if current_variable > 50 and current_variable <= 55
            feature_color := color.rgb(22, 110, 26)
        else if current_variable > 55
            feature_color := color.rgb(5, 235, 15)
        else if current_variable <= 50 and current_variable > 45
            feature_color := color.rgb(89, 6, 31)
        else if current_variable <= 45
            feature_color := color.rgb(247, 0, 75)
    feature_color

if barstate.islast
    label.new(bar_index - 200, y = 20, text = str.tostring(select_x1), color = feature_importance_color(x1a, x2a), style = label.style_circle, size = size.large)
    label.new(bar_index - 250, y = 10, text = str.tostring(y1p), color = color.lime, style = label.style_circle, size = size.large)
    label.new(bar_index - 150, y = 10, text = str.tostring(y1f), color = color.red, style = label.style_circle, size = size.large)
    label.new(bar_index - 200, y = 0, text = str.tostring(x1v), color = color.purple, style = label.style_circle, size = size.large)
    line.new(bar_index - 200, y1 = 20, x2 = bar_index - 250, y2 = 10, color = color.gray, width = 3)
    line.new(bar_index - 250, y1 = 10, x2 = bar_index - 200, y2 = 0, color = color.gray, width = 3)
    line.new(bar_index - 200, y1 = 20, x2 = bar_index - 150, y2 = 10, color = color.gray, width = 3)
    line.new(bar_index - 150, y1 = 10, x2 = bar_index - 200, y2 = 0, color = color.gray, width = 3)
    line.new(bar_index, y1 = 20, x2 = bar_index - 50, y2 = 10, color = color.gray, width = 3)
    line.new(bar_index - 50, y1 = 10, x2 = bar_index, y2 = 0, color = color.gray, width = 3)
    line.new(bar_index, y1 = 20, x2 = bar_index + 50, y2 = 10, color = color.gray, width = 3)
    line.new(bar_index + 50, y1 = 10, x2 = bar_index, y2 = 0, color = color.gray, width = 3)
    label.new(bar_index, y = 20, text = str.tostring(select_x2), color = feature_importance_color(x2a, x1a), style = label.style_circle, size = size.large)
    label.new(bar_index - 50, y = 10, text = str.tostring(y2p), color = color.lime, style = label.style_circle, size = size.large)
    label.new(bar_index + 50, y = 10, text = str.tostring(y2f), color = color.red, style = label.style_circle, size = size.large)
    label.new(bar_index, y = 0, text = str.tostring(x2v), color = color.purple, style = label.style_circle, size = size.large)
    label.new(bar_index - 100, y = -10, text = str.tostring(result), color = color.orange, style = label.style_circle, size = size.large)
    line.new(bar_index - 200, y1 = 0, x2 = bar_index - 100, y2 = -10, color = color.gray, width = 3)
    line.new(bar_index, y1 = 0, x2 = bar_index - 100, y2 = -10, color = color.gray, width = 3)
    label.new(bar_index - 100, y = 30, text = str.tostring(select_y), color = color.navy, style = label.style_circle, size = size.large)
    line.new(bar_index - 100, y1 = 30, x2 = bar_index - 200, y2 = 20, color = color.gray, width = 3)
    line.new(bar_index - 100, y1 = 30, x2 = bar_index, y2 = 20, color = color.gray, width = 3)

data = table.new(position.middle_left, 10, 10, bgcolor = color.rgb(0, 0, 0), frame_color = color.white, frame_width = 3)

if showtable
    table.merge_cells(data, 1, 0, 2, 0)
    table.merge_cells(data, 3, 0, 4, 0)
    table.cell(data, 1, 0, text = str.tostring(select_x1), bgcolor = color.new(color.gray, 75), text_color = color.white, text_halign = text.align_center)
    table.cell(data, 3, 0, text = str.tostring(select_x2), bgcolor = color.new(color.gray, 75), text_color = color.white, text_halign = text.align_center)
    table.cell(data, 1, 1, text = "X1 Independent: ", text_color = color.white)
    table.cell(data, 2, 1, text = str.tostring(select_x1), text_color = color.white)
    table.cell(data, 1, 2, text = "X1 Total Sorted Cases: ", text_color = color.white)
    table.cell(data, 2, 2, text = str.tostring(total_sorted_cases_x1), text_color = color.white)
    table.cell(data, 1, 2, text = "X1 Weighted Impact: ", text_color = color.white)
    table.cell(data, 2, 2, text = str.tostring(math.round(x1_total_weight, 2)), text_color = color.white)
    table.cell(data, 1, 3, text = "X1 Independent Accuracy: ", text_color = color.white)
    table.cell(data, 2, 3, text = str.tostring(math.round(x1a, 2)) + "%", text_color = color.white)
    table.cell(data, 1, 4, text = "X1 Voted Result: ", text_color = color.white)
    table.cell(data, 2, 4, text = str.tostring(x1_voted_result), text_color = color.white)
    table.cell(data, 3, 1, text = "X2 Independent: ", text_color = color.white)
    table.cell(data, 4, 1, text = str.tostring(select_x2), text_color = color.white)
    table.cell(data, 3, 2, text = "X2 Total Sorted Cases: ", text_color = color.white)
    table.cell(data, 4, 2, text = str.tostring(total_sorted_cases_x2), text_color = color.white)
    table.cell(data, 3, 2, text = "X2 Weighted Impact: ", text_color = color.white)
    table.cell(data, 4, 2, text = str.tostring(math.round(x2_total_weight, 2)), text_color = color.white)
    table.cell(data, 3, 3, text = "X2 Independent Accuracy: ", text_color = color.white)
    table.cell(data, 4, 3, text = str.tostring(math.round(x2a, 2)) + "%", text_color = color.white)
    table.cell(data, 3, 4, text = "X2 Voted Result: ", text_color = color.white)
    table.cell(data, 4, 4, text = str.tostring(x2_voted_result), text_color = color.white)
    table.merge_cells(data, 1, 5, 4, 5)
    table.cell(data, 1, 5, text = "Model Accuracy: " + str.tostring(math.round(acc, 2)), text_color = color.white, text_halign = text.align_center)
    table.merge_cells(data, 1, 6, 4, 6)
    table.cell(data, 1, 6, text = "Model Prediction: " + str.tostring(string_Result), text_color = color.white, text_halign = text.align_center)



WTF.

I'm hoping @qldfrog can make something of this.



jog on
duc
 
A long, long time ago, while studying for my engineering degree with a specialisation in AI, we worked with neural networks.
We are talking the 1990s, so no neural chips and very, very limited hardware and software.
Those basic neural networks had very limited numbers of nodes, both "horizontally" and in layers.
At first read, what is implemented here is very similar, BUT the KEY notion was the training:
you feed in sets of input situations and known outcomes to slowly tweak each node to, let's call it, learn how to react (actually just to set internal weightings),
so the value and accuracy of your whole system rests on proper training.
I see no mention of training there. Just one quick read, though, so I might have missed that part.
I am unfamiliar with the Pine Script language, but noted among other lines
"training = input.int(850, "Training Length", group = g1)"
and it follows by using a results matrix set by this training,
so behind all this code, the only real key item is the training driven by this g1 group.

So, in a nutshell: get a basic neural network, feed it (train it on) RSI and other indicators as well as past historic outcomes,
then run it on a current stock with current indicators, et voila, a magic ball to the future.
Maybe, but maybe not.
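
To make the training idea concrete, here is a minimal, hypothetical Pine Script sketch of a single "node" adjusting its internal weightings each bar from indicator inputs and the known next-bar outcome. The features, learning rate and update rule are illustrative assumptions, not taken from the script above:

//@version=5
indicator("Single-node training sketch", overlay = false)
lr = input.float(0.02, "Learning rate")
// two illustrative features, roughly scaled to [-1, 1]
f1 = (ta.rsi(close, 14) - 50) / 50
f2 = ta.roc(close, 10) / 10
// persistent "internal weightings" that the training adjusts
var float w1 = 0.0
var float w2 = 0.0
var float b  = 0.0
// the call made from last bar's features, scored against what actually happened
predPrev = math.sign(w1 * f1[1] + w2 * f2[1] + b)
outcome  = close > close[1] ? 1.0 : -1.0
// perceptron-style update: only nudge the weights when the call was wrong
if predPrev != outcome
    w1 := w1 + lr * outcome * f1[1]
    w2 := w2 + lr * outcome * f2[1]
    b  := b  + lr * outcome
// raw score for the next bar from the weights learned so far
plot(w1 * f1 + w2 * f2 + b, title = "Raw score")

The repeated weight nudges are the "training"; stop updating and the node is frozen, which is exactly the static, black-box property described below.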
I have toyed with this idea in the past,
quite often, believe me, trying to get an edge few have,
but I am well aware of the concept's flaws: the criticality of the training data, and the black-box approach, which means you cannot slightly correct or tweak the network; there is no specific line of code to debug or fix for a given flaw.
And once trained it is static: do you keep training it perpetually?
While I would trust it statistically, maybe for a trading firm, on any one given trade I would sooner trust direct programming based on indicators:
if RSI between x and y, then buy/sell (a minimal sketch of that follows below).
My brain fart of the day.
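
For contrast, the "if RSI between x and y then buy/sell" style of direct programming mentioned above really is only a few lines. A minimal sketch (the 14-period RSI and the 30/70 bands are arbitrary choices):

//@version=5
strategy("RSI band rule sketch", overlay = true)
r = ta.rsi(close, 14)
// buy when RSI drops below the lower band, exit above the upper band
if r < 30
    strategy.entry("Long", strategy.long)
if r > 70
    strategy.close("Long")

Every line is inspectable and tweakable, which is the debuggability the trained network gives up.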
 
When the U.S. government heads toward a partial shutdown, a natural instinct is to anticipate dire consequences for financial markets and the economy.
  • The history doesn't support it, however.
Why it matters: In the past, at least, government shutdowns have been a micro story, not a macro story.
  • That is, they have caused plenty of annoyance and disruption to the work of individual agencies but haven't had any meaningful impact on headline numbers like GDP or unemployment.
  • The open question is whether the Trump administration's handling of the imminent shutdown will change that.
By the numbers: The most recent shutdown lasted 35 days, from late December 2018 through late January 2019. In those two months, payroll employment grew by an average of 221,000 jobs, better than the 166,000 a month notched for the entirety of 2019.
  • In October 2013, a 16-day shutdown coincided with 220,000 jobs added that month, better than the 192,000 average for that year.
  • You similarly can't see much impact in data like the unemployment rate, GDP, or retail sales.
  • Even initial jobless claims, perhaps the best real-time indicator of job market distress, doesn't show much. In the week ending January 12, 2019, in the thick of the last shutdown, there were 218,000 unemployment benefit claims, almost exactly the 2019 average (217,700).
Between the lines: Traditionally, government workers remain employed during a government shutdown, even if they are not allowed to work during that time.
  • They do not consider themselves unemployed, don't file for jobless benefits, don't radically cut back on their spending, and otherwise don't behave in ways that would shape the macroeconomic landscape.
  • Next month's unemployment rate might temporarily rise by as much as 0.2 percentage point, "if furloughed employees are categorized as they have been in prior shutdowns," economists at Goldman Sachs wrote in a client note over the weekend.
Yes, but: The Trump administration is threatening to use a potential government shutdown this week to permanently dismiss many thousands of federal employees.
  • It is unclear if this is legal or plausible, given that many departments have already experienced substantial cutbacks this year.
What they're saying: "In a shutdown, [the Office of Management and Budget] has authority to determine which employees are 'essential' and need to continue working and which are furloughed," Sarah Bianchi of Evercore ISI wrote in a note.
  • "OMB could decide to furlough a larger-than-usual share of federal workers. However, any attempt to immediately fire 'non-essential' workers would likely prompt legal challenge."




Newly published plans from the Department of Labor show that a shutdown this week will delay critical economic releases — Friday's jobs report and more — at a critical moment for the Federal Reserve.
Why it matters: If lawmakers fail to strike a deal, a shutdown could drag on for weeks — postponing the publication of jobs and inflation indicators at a time when the Fed is considering how much to cut rates with sticky inflation and weak hiring.
Flashback: The Fed was grappling with the potential of a debt ceiling breach during the 2013 partial government shutdown. That's not the case this time.
  • Still, delayed economic data releases complicated the Fed's assessment and communications about the economy.
What they said: A Fed staff economist briefed officials that delayed publications "increased uncertainty about the economic outlook," according to a transcript of the central bank's late-October policy meeting in 2013. That transcript gives some insight into how the Fed thinks about situations like this.
  • "Given the noise that the government shutdown will introduce into the next employment report, it may be some time before we get a clearer read on whether the slowing we are seeing in the pace of labor market improvement is just a temporary step down," said then-Fed vice chair Janet Yellen.
Reality check: "Not only was the data flow interrupted by the government shutdown, but it is also hard to interpret the data and anecdotal information we do have, because they are colored by households' and businesses' reactions to the drama in Washington," Charles Evans, who led the Chicago Fed at the time, said.
  • Then-Cleveland Fed president Sandra Pianalto warned that the BLS shutdown could impact inflation data for months to come.
  • "The errors due to missed price observations at the individual item level will be the largest in the October data, but [those] errors can persist for up to six months, as not all items are surveyed every month," Pianalto said.
The intrigue: "A reduction in quality of data collected might impact the quality of future estimates produced," the Labor Department said in a memo this morning.
The other side: At the meeting, New York Fed president John Williams said the shutdown "complicated our job ... but we are far from flying blind," acknowledging private-sector data that helped fill the information gap.
  • Of note: The BLS continued to be funded and released economic reports as usual during the 2018 government shutdown.
The bottom line: The Fed says it will make interest rate decisions based on incoming economic data. That will be complicated if there is less government data than usual to rely on when policymakers meet Oct. 28-29.




It’s hard to believe we’re debating a potential bubble right now considering we had a mini-blink-and-you-missed-it bear market in April.

That downturn feels like an out-of-body experience because it happened so quickly.

That first week or so of April saw back-to-back down days of -5% and -6%. A few days later the market was up almost 10% in a single day and we were off to the races.

These V-shaped rallies feel like a product of the information age where markets move faster than ever and are being driven more and more by outlier events.

That’s how it feels at least.

However, if you look at the average path of every bear market since 1950, the current iteration looks pretty darn close:

price-path-before-after-bear-market-lows-10-scaled.png
It’s not perfect but you get the waterfall drop followed by the big recovery on the other side of it. V-shaped rallies are nothing new. That’s the norm.

We’ve seen a similar profile in sector performance this year:

sp500-sector-performance-year-to-date-27-scaled.png
Technology and communication services (basically tech) both experienced massive drawdowns earlier this year but are now each sitting on 20%+ gains for the year. That’s another V.

Of course, these relationships are not set in stone. Consumer stocks also got hammered in the downturn but haven’t seen similar year-to-date gains.

Bear markets have some symmetry to them, at least in the short-term.

In the long-term, bull markets versus bear markets are asymmetric. Things are not balanced.

Look at the gains versus losses:

sp500-historic-bull-and-bear-markets-23-scaled.png
The bear markets are blips.

To be fair, those losses don’t feel like blips when you’re in them. Bear markets can be brutal. Losing money is not fun. Seeing a large portion of your portfolio get vaporized can cause you to question your sanity as an investor.

And yet…the bull markets completely overwhelm the bear markets.

It’s not even close.

That’s the beauty of the stock market. Despite all of the lousy things that can and will happen at times, it still pays to stay invested over the long haul.

You just have to survive many short hauls to get there.






From FT:

This summer, many systematic, algorithm-powered trading strategies suffered an abrupt and mysterious pummeling that was somewhat reminiscent of the infamous “quant quake” of 2007. It wasn’t nearly as violent as in 2007 — it was more an unpleasant quiver than an earthquake — but it was enough to fray nerves in some corners of the quantitative hedge fund industry. The reversal might have been started by a “garbage rally” in heavily shorted stocks, but some think that it might have been exacerbated by one of the biggest new trends in quant investing: the growing overlap between market-making trading firms such as Citadel Securities, Hudson River Trading or Jane Street, and big hedge funds such as DE Shaw, Millennium, Point 72 or Qube Research & Technologies.

Some in the industry are sceptical that this increasing overlap was a factor in the July “quant quiver”, pointing out that the strategies that were the worst hit were mostly longer-term ones, rather than those using quicker signals, where competition is becoming more ferocious. Nonetheless, both proprietary trading firms and hedge funds concede that two industries — which for years virtually operated in separate worlds — are now starting to come together. As a senior executive at one of the big multi-strategy hedge funds told FT Alphaville:

There are times when an industry’s structure changes. We’re now in the early stages of seeing a reorganisation of systematic trading, where some successful prop trading firms are going to increasingly resemble hedge funds, and some successful hedge funds will start to look like prop trading firms . . . There is an interplay and growing overlap in their skillsets and strategies. It will be interesting to see how it plays out, but they are definitely beginning to converge.

This trend has been quietly emerging since 2020-21, but has become much more apparent in the past year or so. The confluence also has myriad implications for both industries — and the markets where they’re increasingly colliding. This may test your patience, but to understand how it happened and why it’s so interesting it’s probably worth first diving briefly into the parallel histories of high-frequency trading and the quantitative hedge fund strategy known as “statistical arbitrage”. Feel free to skip the next two sections if you know all this stuff already.

📈📉📈📉📈📉📈📉📈📉📈📉


High-frequency trading has evolved dramatically in the decades since its genesis, whether you think this was the NYSE’s first electronic “designated order turnaround” system in the 1970s, the “bandits” that preyed on the Nasdaq’s Small Order Execution System in the 1980s, or the explosion of automated trading on “electronic communication networks” in the 1990s. The cottage industry’s first big inflection point came in 2005 with the SEC’s introduction of Regulation National Market System — or RegNMS as it’s usually called. By modernising the US equity market structure and encouraging greater competition, this became “the final structural move that set the stage for the current electronic trading revolution”, as one academic noted in a 2010 study. Just how far the revolution had come first became apparent to the general public in the 2010 “Flash Crash”, when the US stock market suddenly careened lower at speeds humans struggled to comprehend. The normie view of high-frequency traders as financial parasites that ruin markets was then crystallised by Michael Lewis’ bestselling 2014 book Flash Boys.

Many in the industry — who saw themselves as geeky disrupters that stuck it to Wall Street and made trading cheaper for investors — were horrified at their portrayal. In fact, when Ari Rubinstein, the head of Global Trading Systems, first heard that one of his favourite authors was writing a book about his industry he assumed that they’d naturally be the heroes of the tale. As he told the FT a few years ago: I thought, finally, someone is going to glorify what we’ve been able to do. A bunch of people were able to disrupt the industry, create a lot of efficiency, save people a lot of money and get rid of the middlemen in the process — and I was like, ‘Holy cow! is he going to call us?’ And then, when I found out that, ‘Oh no, you’re the villain’, I was really surprised. Politico really nailed the zeitgeist with this illustrative gif back in 2016. However, the classic view of HFTs as a monolithic group of purely algorithmic, hyperactive speed merchants was never entirely correct, and is now a little outdated. Pure speed is still essential to swaths of bread-and-butter market-making. What was once measured in milliseconds (thousandths of a second) became nanoseconds (billionths of a second) in the noughties, and is today often done in picoseconds (one trillionth of a second).

In this space, microwave towers and “co-location” are still important. But “low latency trading” — as people in the industry usually call this form of HFT — is butting up against the limits of physics. Moreover, intense competition has made it much less profitable. As one HFT executive says: “There is no alpha, it’s all latency. That gets all the focus, but it doesn’t actually make much money.” As a result, there’s been a massive amount of consolidation in recent years, with many early HFT pioneers falling by the wayside and others simply stagnating. The new HFT royalty are therefore primarily (if not exclusively) firms that have evolved into “proprietary” trading firms that also make bets with their own capital — as opposed to only pocketing the spreads between two-sided quotes on securities — and which have broken out from the pure speed game by holding positions for minutes, hours or even days. The best examples are probably firms such as Jane Street, Citadel Securities, DRW, Susquehanna International Group and Hudson River Trading. Sure, most of these firms still do a lot of classic high-speed, high-frequency, high-volume trading, but increasingly the really big profits are coming from prop trading and slower signals. And this is starting to bring them into territory historically ploughed by hedge funds that pursue a strategy called “statistical arbitrage”. 📈📉📈📉📈📉📈📉📈📉📈📉


Sometime in the early 1980s a programmer named Gerry Bamberger pioneered something called “pairs trading” at Morgan Stanley. It quickly proved a phenomenon. Bamberger must have cut an odd figure at Morgan Stanley. This was Wall Street’s Waspiest firm, and he was a tall, cerebral Orthodox Jew with a heavy smoking habit who ate a packed tuna salad sandwich for lunch every single day. But the strategy he developed became a money machine for the bank, which called his new desk Advanced Proprietary Trading. Pairs trading involved finding pairs of securities that were usually closely correlated — like Pepsi and Coke, Royal Dutch and Shell, or Berkshire Hathaway’s different share classes — but occasionally veer off in opposite directions. You then short one and long the other, betting that the historical link would reassert itself. Over time this evolved into the broader strategy dubbed statistical arbitrage, where you constantly scour markets for thousands of opportunities like this, hedge out the overall stock market risk and try to just generate pure, sweet market-beating alpha.
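
For anyone who wants the mechanics rather than the history: a bare-bones version of the pairs/stat-arb signal is easy to sketch in Pine Script. Track the spread between two historically related instruments and flag when it stretches too far from its own recent average. The second symbol, the lookback and the ±2 bands are illustrative assumptions:

//@version=5
indicator("Pairs z-score sketch", overlay = false)
leg2 = input.symbol("NYSE:KO", "Second leg")   // e.g. chart PEP against KO
lookback = input.int(60, "Lookback")
p2 = request.security(leg2, timeframe.period, close)
// log-price spread between the chart symbol and the second leg
spread = math.log(close) - math.log(p2)
// how stretched the spread is versus its own recent history
sd = ta.stdev(spread, lookback)
z = sd > 0 ? (spread - ta.sma(spread, lookback)) / sd : na
plot(z, title = "Spread z-score")
hline(2.0, "Short the spread")
hline(-2.0, "Long the spread")

When the z-score pushes above the upper band you would short the rich leg and buy the cheap one, then unwind as the spread mean-reverts; the hedge fund versions simply run this logic across thousands of names with the overall market risk hedged out.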

These stat-arb bets can range from simple pairs trading to the more complex, such as arbitraging divergences in the price of US equity market exposure through individual stocks, marketwide ETFs, index options and futures. It quickly became a big deal, as former Morgan Stanley risk management supremo Richard Bookstaber later wrote in his book A Demon of Our Own Design: Thanks to Gerry Bamberger, who started as a programmer on Morgan’s equity desk, the way trading was done and the function it performed had changed. As a result of his work, the computational power for statistical analysis was unleashed on the markets and — using the newfound execution capabilities of the equity market — a machine was created to harvest opportunities to provide liquidity. Bamberger had moved at least one segment of the market from that of hunter-gatherer to farming. Bamberger later fell out with Morgan Stanley and left for Ed Thorp’s pioneering quant hedge fund Princeton/Newport Partners.


Morgan Stanley’s APT desk was taken over by Nunzio Tartaglia, a famously sweary Jesuit-educated former physicist, who for a period took it to new heights. In 1987 APT reportedly made $50mn of profits for Morgan Stanley, a fortune at the time and particularly remarkable given the Black Monday crash that year. However, by the end of the decade returns started to fizzle, and many of the top quants on Morgan Stanley’s APT desk headed to the exit. Among them was a brilliant technologist called David Shaw. He started his own hedge fund built on statistical arbitrage — which is today’s $70bn DE Shaw. DE Shaw in turn birthed Two Sigma — another giant of the quant hedge fund industry — while Morgan Stanley’s APT desk was eventually resurrected in the form of Peter Muller’s Process Driven Trading Group. In 2012 this was spun off of the bank as the hedge fund PDT Partners. Between them, DE Shaw, Two Sigma and PDT reportedly manage roughly $150bn, much of it in stat-arb strategies.

Renaissance Technologies’ fabled Medallion fund is also said to mostly consist of statistical arbitrage. Just like with HFTs, the stat-arb world is not monolithic. Strategies and holding periods can vary enormously. Some hedge funds might use signals that only hold positions for a few hours, but it’s usually days, and they can be weeks. HFTs also often look for pricing discrepancies that they can arbitrage, but over very different timeframes, and for a long time they overwhelmingly went home “flat” — in other words, most positions were closed out at the end of every trading day. After all, while hedge funds manage other people’s money, prop firms usually only have their own partners’ capital to play with, so they didn’t want to lug around lots of risky positions on their balance sheet for too long. As a result, they evolved as essentially two different industries, with remarkably little overlap despite being mostly staffed by the same types of programmers, mathematicians and scientists and pursuing vaguely similar systematic financial trading strategies. Until now, at least.

📈📉📈📉📈📉📈📉📈📉📈📉


Industry insiders say the overlap first began in earnest in the wake of the Covid-19 trading boom, when several trading firms made so much money that they had to find new places to deploy it. After all, in classic “low-latency” strategies there’s a limit to how much capital you can deploy at any given time, given how swiftly securities are traded. What works with $1bn can actually work less well with $10bn. However, longer term trades — and by longer term, we’re obviously talking in relative terms — allow firms to deploy more resources, in terms of capital, people and technology. As the head of one large US trading firm tells Alphaville: It’s a capacity issue. To keep growing revenues you need to take larger positions.

There’s been a lot of investment in research and compute, and those are very high fixed cost investments, so you want as much investment capacity as possible to earn that out.

At the same time, many trading firms saw with envy how much money Jane Street in particular was starting to make, mostly because of its ability and willingness to carry positions for a bit longer than the norm — the result of its role as a major market-maker for ETFs. In the first half of 2020 alone Jane Street notched up $8.4bn of net trading revenues, more than twice that of its rival Citadel Securities. Soon enough, more and more trading firms started adding “mid-range” trading strategies to their arsenal, which are now said to be particularly strong profit centres at the likes of Hudson River Trading. At HRT these signals are housed in a separate unit called Prism, which reportedly notched up profits of more than $2bn last year. At Tower Research, mid-frequency trading now accounts for about 25-30 per cent of revenues, up from under 10 per cent 2-3 years ago, according to a person familiar with the matter.

Citadel Securities has largely remained a classic market-maker — given that Ken Griffin’s separate hedge fund Citadel already does plenty of prop trading — but it too is said to be holding positions for longer these days. A spokesperson for the company said: “Depending on size, product, risk and liquidity dynamics, we warehouse this risk over a range of time durations, sometimes up to weeks.” Naturally, this has meant that many hedge funds have been eyeing with equal jealousy the huge profits these trading firms have made in recent years. After all, many of the hedge fund industry’s top names are closed to new investment because their existing strategies also have capacity constraints. As a result, they are constantly on the prowl for new ones that might allow them to keep more investor money rather than sending billions of dollars' worth of gains out the door every year. Moreover, faster, systematic strategies typically boast high “Sharpe ratios” — a measure of investment returns relative to their volatility — which can make a fund’s overall results look prettier, as one senior hedge fund executive notes:


Hedge funds need high Sharpe capacity strategies, because there is a barbell sort of complementarity between high Sharpe strategies with low capacity and high capacity strategies with low Sharpe ratios. So hedge funds want more high Sharpe strategies — and those are typically lower latency strategies — in order to support strategies like commodities or some fixed income.

As a result, hedge funds with stat-arb strategies and prop trading firms are increasingly competing in trading strategies with holding periods ranging from a few hours to a few days. A chart from Goldman’s last annual survey of the quant hedge fund industry shows how the estimated market footprint of prop traders has expanded dramatically in recent years. Some industry insiders argue the convergence is mostly a case of prop trading firms rolling their tanks on to the stat-arb lawn, rather than stat-arb hedge funds also speeding up and encroaching on prop trading turf.

But the head of one major quantitative hedge fund told Alphaville that he was definitely seeing a move towards lower-latency trading signals by his industry. At the very short end [of latency], opportunities have compressed; at the very long end [of investment strategies], premia are crowded. Naturally, capital and talent migrate towards the middle. So you see prop firms with execution DNA stretching into multi-day signals, while classical stat-arb firms accelerate their cycles. For now, prop trading firms are said to have been more successful in adding “slower” trading strategies than hedge funds have been at speeding up. As one quant hedge fund manager we spoke to observed: “It’s easier to go from building a Ferrari to building a Volkswagen, than from building a Volkswagen to a Ferrari.” However, some prop trading firm executives say this understates the apparent success of some hedge funds such as Qube Research & Technologies and long-established quant powerhouses like DE Shaw, and reckon the trend will become even more pronounced in the coming years. As one put it to us: The Venn diagram never overlapped before, and now it does. It started overlapping in a tiny way maybe five years ago, but in five years’ time the overlap will probably be even bigger than it is today. 📈📉📈📉📈📉📈📉📈📉📈📉


So what does this all mean? Why do we even care? We shouldn’t, on a cosmic scale. But here are a handful of implications that Alphaville thinks are worth bearing in mind, in roughly ascending order of importance. 1️⃣ Hedge funds and prop trading companies are going to be increasingly competing not just for entry-level talent — freshly-graduated computer scientists, mathematicians and general brain boxes — but also for mid-career people. Millennium’s fateful poaching of two Jane Street traders is therefore probably just the beginning. We’re going to see more of this happening — given that some hedge funds can pony up $100mn packages for portfolio managers — but we’re also likely going to see quant hedge fund people migrate to the prop trading world. After all, even the interns make stupid money. 2️⃣ Prop trading firms are becoming increasingly important clients of Wall Street, and this is just going to become even more pronounced. On the current trajectory they soon might rival hedge funds and private equity for importance. Historically, prop trading firms operated fairly separately from the banks.

They might route trades to or through them, but didn’t rely on them in the symbiotic way that hedge funds do. As Jarrod Yuster, the chair and CEO of trading tech provider Pico, says: It’s a very technology-intensive business, and generally they don’t hold positions overnight. You don’t need financing for that, so the business they offered banks was just execution and trading fees. Banks therefore valued quant funds more than HFTs. However, as prop trading firms have begun to spread their wings they need more financing and other services. As a result, they have grown radically in importance to the prime brokerage units of big banks that have usually catered only to hedge funds — even though the trading firms are in many respects rivals to other parts of the same bank. As Goldman Sachs’ markets supremo Ashok Varadhan told IFR earlier this year: As you grow and become more relevant, there are going to be times when your clients will be your competitors and you just have to manage through that and have the maturity to realise that you’re going to collaborate in some areas and compete in others.


3️⃣ Prop trading firms are going to be raising more external capital, and hedge funds will become increasingly tempted to hive off their best trading strategies in internal funds for their own partners and employees. In fact, both of these things are already happening — at least in some form. Citadel Securities, Hudson River Trading and Jane Street have all tapped the debt markets to boost the firepower offered by their retained profits. Given how much money these firms have made in recent years they may never want or need to start more traditional fund-like investment vehicles, but Tower Research — one of the HFT pioneers — has talked to investors about doing so, and several industry insiders predict that “proper” HFT-powered long-short equity funds will inevitably emerge. At the same time, some of the higher-profile stat-arb hedge funds are either already entirely employee money (Renaissance’s Medallion) or probably heading in that direction (DE Shaw’s Valence fund). It’s natural that more and more successful hedge funds start housing their low-capacity, high-Sharpe quasi prop-trading strategies in internal funds — even if it annoys investors no end and needs to be done very carefully.


4️⃣ Increasing competition in mid-frequency trading — generally said to be in the 1-5 day range — could cause crowding in some signals. The dangers of this crowding are ramped up by the growing use of leverage to maximise profits. There are good reasons to be sanguine about this. A more diverse set of participants is generally a healthy thing for a market. Prop trading firms overwhelmingly deploy their own money — the stickiest capital there is. Both prop firms and quant hedge funds are as a rule pretty obsessive about risk, and particularly assiduous about monitoring for signs of herd-like behaviour. However, as the head of one big trading firm told Alphaville: It is a concern how much crowding there is in stat arb signals right now. July was a sign that things could be very crowded . . . It’s unclear how much of it is due to this, but there’s a lot of money chasing similar strategies and signals in similar instruments, which can cause correlated drawdowns. Where on the Richter scale the next quant quake will measure is beyond Alphaville. But we can see that having nearly two-thirds of US equity trading volume — or ca 20x the total trading volume of the entire long-only investment fund industry — potentially getting locked into correlated drawdowns might not be ideal.
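
A quick footnote on the "Sharpe ratio" referenced a few paragraphs up, since it carries a lot of the argument: it is simply average return divided by the volatility of those returns, usually annualised. A minimal rolling version in Pine Script (the window, the zero risk-free rate and the 252-day annualisation are simplifying assumptions):

//@version=5
indicator("Rolling Sharpe sketch", overlay = false)
len = input.int(252, "Window (bars)")
ret = close / close[1] - 1
avgRet = ta.sma(ret, len)
sdRet = ta.stdev(ret, len)
// annualised Sharpe, assuming daily bars and ignoring the risk-free rate
sharpe = sdRet > 0 ? avgRet / sdRet * math.sqrt(252) : na
plot(sharpe, title = "Rolling Sharpe")

High-Sharpe, low-capacity strategies smooth a fund's overall return stream, which is why the hedge funds in the piece keep hunting for more of them.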



Screenshot 2025-09-30 at 9.10.56 AM.png


Full:https://www.wsj.com/politics/nation...3?st=5TqPdT&reflink=desktopwebshare_permalink


Screenshot 2025-09-30 at 9.12.08 AM.png


Full:https://www.ft.com/content/2a4d7883-e9b5-4a98-b245-76232e70d3df


And late breaking:https://www.foxnews.com/politics/tr...834&itbl_campaignId=15117946&utm_medium=Email


Which rather ties in with the above.


jog on
duc
 