From seekingalpha.com

Apple Unleashes The World’s First Smartphone With A ‘3 Nm’ Processor (NASDAQ:AAPL)

Apple unveils iPhone 15 and other new products (Justin Sullivan/Getty Images News)

Apple Inc.’s (NASDAQ:AAPL) just-announced iPhone 15 Pro will be the first smartphone to use Taiwan Semiconductor Manufacturing Company’s (TSM, “TSMC”) N3 process, often referred to as “3 nm.” The new process enables a host of new features for iPhone, including a more powerful graphics section with hardware accelerated ray tracing, and an AI accelerator that is twice as fast as the previous iPhone Pro generation. Although Apple won’t be able to keep N3 all to itself, for now iPhone has leapt ahead of the competition in almost every category.

The iPhone 15 Pro in Natural Titanium (Apple)

Separating fact from fancy on TSMC’s N3 “3 nm” process

A year ago, I wrote that Apple would be first to “3 nm,” and that has come to pass, although the route has been longer and more tortuous than expected. Also as I expected, the first Apple device to feature TSMC’s N3 process is iPhone, not the Mac as some had hoped.

For almost any new process, it can be assumed that the first commercial use will be for small-area processors such as those used in smartphones. Smaller dies are less likely to be hit by a defect, so they yield better while the process is young and defect density is still relatively high. Starting N3 off with the relatively large M-series Mac SoCs never really made sense.
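To put rough numbers on the yield argument, here is a simple Poisson defect model. The defect density and die areas below are illustrative assumptions of mine, not TSMC figures; the point is only how quickly the fraction of good dies falls as die area grows.

```swift
import Foundation

// Rough sketch of the yield argument: a simple Poisson model,
// yield = exp(-defectDensity * dieArea). The defect density is an
// assumed illustrative value, not a disclosed TSMC figure.
func poissonYield(defectsPerCm2 d0: Double, dieAreaMm2 area: Double) -> Double {
    let areaCm2 = area / 100.0              // 1 cm^2 = 100 mm^2
    return exp(-d0 * areaCm2)
}

let d0 = 0.5                                 // assumed defects per cm^2 on a young process
print(poissonYield(defectsPerCm2: d0, dieAreaMm2: 90))    // ~0.64 for a phone-sized SoC
print(poissonYield(defectsPerCm2: d0, dieAreaMm2: 450))   // ~0.11 for a large Mac-class SoC
```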

But before I delve into the advantages conferred by the new process, I need to take a few paragraphs to dispel some myths about N3, which, unfortunately, Apple itself is now guilty of promulgating. In the September 12 event video, Sribalan Santhanam, VP of Silicon Engineering, comes out and says “elements of these transistors are just over 12 silicon atoms wide.” That’s misleading, at the very least.

Sribalan Santhanam at Apple’s September 12 event (Apple, via YouTube)

Without knowing what “elements” Santhanam was referring to, I can’t say absolutely that he’s wrong. In some areas, deposition layer thicknesses may be that thin, but that’s not really important from the standpoint of transistor areal density, which is how many transistors you can put on the flat surface of the chip, measured in millions of transistors per square millimeter.

Manufacturers characterize how tightly transistors can be packed side by side on a wafer by a few key dimensional parameters, such as the spacing between the metal contacts of adjacent transistors, referred to as the Contacted Poly Pitch or CPP.
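To see how a pitch like CPP feeds into areal density, here is a deliberately simplified standard-cell model. The cell geometry, transistor count, and pitch values below are assumptions for illustration, not TSMC disclosures.

```swift
// Simplified standard-cell density model: assume a 4-transistor NAND2 cell
// that is 3 gate pitches (CPP) wide and 6 metal tracks tall. All values
// are illustrative assumptions, not disclosed process parameters.
func densityMTrPerMm2(cppNm: Double, metalPitchNm: Double) -> Double {
    let cellAreaNm2 = (3.0 * cppNm) * (6.0 * metalPitchNm)
    let transistorsPerCell = 4.0
    return transistorsPerCell / cellAreaNm2 * 1e12 / 1e6    // 1 mm^2 = 1e12 nm^2; report millions
}

print(densityMTrPerMm2(cppNm: 51, metalPitchNm: 30))   // ~145 MTr/mm^2 with N5-like pitches
print(densityMTrPerMm2(cppNm: 45, metalPitchNm: 23))   // ~215 MTr/mm^2 with tighter assumed pitches
```

Published density metrics weight several cell types and account for design overhead, so treat this only as a directional illustration of why a tighter CPP and metal pitch matter.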

At the International Electron Devices Meeting in 2022, TSMC presented a paper on its N3 process and disclosed the CPP for N3. Scotten Jones summarized the disclosure at SemiWiki in the table shown below:

N3 scaling comparison (Scotten Jones, SemiWiki)

TSMC didn’t reveal any of the other characteristic dimensions shown in the table, which is why they’re blank for N3. But even scaled proportionally by the change in CPP, none of them would approach the 12 silicon atom width, which is about 1.6 nm.

And this is something that Intel (INTC) has been complaining about for some time, justifiably. There’s really nothing in terms of meaningful physical transistor dimensions that corresponds to 3 nm, let alone some fraction of it.

Manufacturers like TSMC and Intel have been moving away from calling out nanometers. TSMC calls its processes N7, N5, and N3, and Intel calls its processes Intel 7, Intel 4, Intel 3, and so on. But it still makes for good marketing copy for foundry customers like Apple to refer to nanometers.

We don’t yet know what transistor density Apple was able to obtain for the A17, but TSMC has been claiming that N3 would offer a 70% increase in transistor density compared to N5. Apple’s first N5 processor was the A14 Bionic, which had a total transistor count of 11.8 billion on a silicon die size of 88 mm^2 for a practical density of 134 million transistors per mm^2.

The new N3 process will likely increase transistor density to about 227 million transistors per mm^2. For the first generation N3 processors, Apple has likely opted for a chip size reduction in order to improve yield, while still increasing transistor count. Apple claims the new A17 Pro has 19 billion transistors, for a likely chip size of about 84 mm^2.
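The arithmetic behind these estimates is simple enough to check. The 70% figure is TSMC’s claim, and the resulting A17 die size is my estimate, not a disclosed number.

```swift
// Back-of-the-envelope version of the density and die-size estimates above.
let a14Transistors = 11.8e9
let a14DieAreaMm2 = 88.0
let a14Density = a14Transistors / a14DieAreaMm2 / 1e6          // ~134 million transistors per mm^2

let claimedN3Gain = 1.70                                        // TSMC's claimed density increase over N5
let estimatedN3Density = a14Density * claimedN3Gain             // ~227 million transistors per mm^2

let a17Transistors = 19.0e9                                     // Apple's claimed A17 Pro transistor count
let estimatedA17DieAreaMm2 = a17Transistors / (estimatedN3Density * 1e6)   // ~83-84 mm^2

print(a14Density, estimatedN3Density, estimatedA17DieAreaMm2)
```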

The A17 Pro has more than 50% more transistors per chip than the A14. Last year’s A16 Bionic had “nearly 16 billion transistors,” so the A17 Pro has at least 3 billion more than the A16. Three billion transistors translate into a lot of additional circuitry and capability. In the rest of this article, I’ll look at how Apple has chosen to use this new capability.

How Apple has used the 3 billion extra transistors of the A17

The A17’s transistor count is only about 20% greater than the A16’s, so Apple had to be careful about how it spent the new transistors. Apple apparently didn’t lavish many new transistors on the CPU, which is unchanged in core count and only about 10% faster than the A16.

This makes sense, since the A16’s CPU performance was already better than the nearest competition, the Qualcomm (QCOM) Snapdragon 8 Gen 2, which can be found in the Samsung (OTCPK:SSNLF) Galaxy S23 Ultra. Although it’s only one of many benchmarks, in Geekbench CPU testing, A16 mops the floor with the Snapdragon:

Geekbench Test     Apple A16    QCOM Snapdragon 8 Gen 2
CPU Single Core    2520         1878
CPU Multi Core     6387         4973

Both the A16 and Snapdragon are fabricated on TSMC’s N4 process, so the Snapdragon wasn’t disadvantaged in terms of process. Apple’s custom ARM CPU cores continue to lead the industry in performance and power efficiency.
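Expressed as a percentage lead, using the scores from the table above:

```swift
import Foundation

// Relative CPU advantage implied by the Geekbench scores in the table above.
let singleCore = (a16: 2520.0, snapdragon: 1878.0)
let multiCore  = (a16: 6387.0, snapdragon: 4973.0)

let singleLead = (singleCore.a16 / singleCore.snapdragon - 1) * 100   // ~34% faster
let multiLead  = (multiCore.a16 / multiCore.snapdragon - 1) * 100     // ~28% faster

print(String(format: "Single core: +%.0f%%, multi core: +%.0f%%", singleLead, multiLead))
```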

Where Apple did spend its extra transistors can be seen from this summary of A17 improvements:

A17 Pro improvements (Apple)

Compared with the A16, Apple completely revamped the GPU with a new shader architecture and added one more GPU core. The GPU now features hardware accelerated ray tracing, a first for Apple.
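For developers, that capability surfaces through Metal’s ray tracing API. Here is a minimal sketch, not Apple’s sample code, of how an app might check for it before choosing a render path; the log messages are mine.

```swift
import Metal

// Minimal sketch: query Metal ray tracing support on the current device.
// On A17 Pro the intersection work is hardware accelerated; on older chips
// the same API may fall back to a slower compute-based path or be absent.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

if device.supportsRaytracing {
    // A real renderer would now build acceleration structures from its scene
    // geometry (MTLPrimitiveAccelerationStructureDescriptor) and dispatch
    // ray intersection queries from its shaders.
    print("Metal ray tracing available on \(device.name)")
} else {
    print("Falling back to a rasterization-only path")
}
```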

Although the GPU got a big boost, I suspect that Apple saved most of its transistors for the new A17 Neural Engine, which has the same number of cores as the A16’s but is twice as fast.

New features enabled by A17: better gaming with playable ray tracing, and transformer AI on device

What do these new hardware features mean for users? The latest generation of game consoles has featured hardware accelerated ray tracing for years. Since the advent of those consoles, Apple has not been able to credibly claim “console level performance” for iPhone, iPad, or even Mac.

With hardware accelerated ray tracing, which is augmented by the Neural Engine, Apple can begin to catch up with console state of the art and attract more game developers to the platform. Ubisoft is there with The Division Resurgence:

Scene from The Division Resurgence (Apple, via YouTube)

Resident Evil Village and the Resident Evil 4 remake will also be on the platform, and the latest entry in the Assassin’s Creed franchise, Mirage, is coming to iPhone as well.

Scene from Assassin’s Creed Mirage (Apple, via YouTube)

Revitalizing gaming on Apple platforms is vital to expanding the constituency for Apple Silicon. One of the major shortcomings that reviewers cite for Apple computing devices is a lack of support for popular PC and console games.

Furthermore, performance for Intel-native games translated through Rosetta on the Mac has not been very good. Now that Apple has fully converted its Mac lineup to Apple Silicon, all of Apple’s computing products will be able to build on the A17’s graphics architecture.

Apple seems to be committed to on-device AI, and that now includes pre-trained transformer models. Here, Apple is swimming against the prevailing current, which mainly places generative pre-trained transformers in the cloud because of their very demanding computational requirements. But transformers can also run on local devices if the scope of their functions is sufficiently narrow.

Apple now uses a transformer hosted in the Neural Engine as the basis for Siri speech recognition and text completion in messaging. There is hardly any function of iPhone that is not enhanced by AI in some way.
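Apple doesn’t expose its own Siri models, but the same Neural Engine is available to third-party apps through Core ML. Here is a minimal sketch of steering a compiled transformer model toward the Neural Engine; the model file name is hypothetical.

```swift
import Foundation
import CoreML

// Minimal sketch: load a compiled Core ML model and ask Core ML to prefer
// the Neural Engine. "TextCompletion.mlmodelc" is a hypothetical file name.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine      // prefer the ANE over the GPU

let modelURL = URL(fileURLWithPath: "TextCompletion.mlmodelc")
do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print("Inputs:", Array(model.modelDescription.inputDescriptionsByName.keys))
} catch {
    print("Failed to load model: \(error)")
}
```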

Computational photography is AI-based and offers a new function that automatically finds people in an image to support portrait mode. Computational photography also supports capturing what Apple calls “spatial video,” or stereo video shot with the main 48 MP camera and the 12 MP ultrawide camera together. These videos can then be viewed in stereoscopic 3D on Vision Pro.

Even plain old voice calling gets a boost with AI-based background noise suppression. Background noise suppression is actually a difficult problem when you don’t have a separate background audio signal that is free of the user’s voice. But it’s a good problem for AI, and Apple’s approach is probably the most effective available.

Investor takeaways

Now that every Apple platform is based on Apple Silicon, it has become more important than ever for investors to pay attention to the iPhone launch. Where Apple goes with the A-series SoCs, the rest of the product line will follow.

A good example is the new Series 9 Watch, which received the new S9 System in Package (SiP). For the first time, even the little S9 got a Neural Engine, and transformer-based Siri dictation along with it.

The Apple Watch Series 9 SiP (Apple, via YouTube)

The emphasis on graphics enhancement in A17 Pro is as much a response to what’s happening in the Mac product line as it is to iPhone itself. At the high end of the line, where the M2 Ultra is used, CPU performance is very impressive, but GPU performance, not so much.

In the realm of high-end graphics workstations, Apple really can’t compete with PCs equipped with Nvidia’s (NVDA) RTX 4090. At least not yet. But the A17 provides a good foundation on which to build a much more capable GPU for iPads and Macs.

And as the world rushes to embrace generative pre-trained transformers (GPTs), Apple knew it needed to greatly enhance its on-chip AI if it had any hope of keeping up. Of course, Apple can’t run a ChatGPT-class model on an iPhone, but it can still make use of transformer models to perform useful functions, and it has started to do so with the A17.

We’ll see even more capable Neural Engines based on the same architecture in future Macs. For the Mac, a scaled down ChatGPT is not inconceivable, perhaps as a Siri replacement (but still called Siri). This could be very important for Apple.

GPTs basically serve as intelligent search engines and will probably replace the traditional search engine over time. Apple hasn’t been competitive in search and has remained dependent on Google (GOOG). With the advent of GPTs, that could change. And it’s certainly conceivable that Apple could provide its own cloud-based GPT service exclusively for Apple devices, which could work in conjunction with on-device AI.

Overall, iPhone 15 Pro is a very capable little computer, which can be connected to a 4K monitor via USB-C. Future iPads will use the A17 or processors based on it, and these will be full-fledged computers in their own right with keyboards, mice and external monitors connected via USB-C.

iPhone 15 and 15 Pro will do very well in the marketplace, and may even revive the smartphone industry, which has been in the doldrums. I look for iPhone to resume revenue growth in fiscal 2024. I remain long Apple and rate it a Buy.

Consider joining Rethink Technology for in depth coverage of technology companies such as Apple.
