• 0 Posts
  • 117 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • x86/x64 code is pretty much 100% compatible between AMD and Intel. On the GPU side it’s not that simple, but Sony would’ve “just” had to port their GNM(X) graphics APIs over to Intel (Arc, presumably), just like most PC games work fine and in the same way across Nvidia, AMD and Intel GPUs. But they have to do that to some extent anyway even with newer GPU architectures from AMD, because the PS4’s GCN isn’t 1:1 compatible with the PS5’s RDNA2 on an architectural level, and the PS4’s Jaguar CPU isn’t even close to the PS5’s Zen 2.

    Other than that, you’re right. Sony wouldn’t switch to Intel unless they got a way better chip and/or way better deal, and I don’t think Intel was ready with a competitive GPU architecture back when the PS5’s specifications were set in stone.






  • iOS used to handle background tasks only via very specific APIs, starting with iOS 4. I believe this was reworked around iOS 7, and it now behaves similarly to Android in that background apps are suspended by default. According to an old video by Android Authority, iOS seems to be able to compress suspended apps down to a smaller memory footprint than Android. Both OSes allow background services to run, but to my understanding iOS keeps much tighter control over that than Android (although vendor-specific battery saving features probably attempt something similar on Android). So in that way, it’s still more specific/selective on iOS. Prompt (an iOS SSH app), for example, uses the location service in the background to prevent iOS from eventually killing active connections (a rough sketch of that technique follows after this comment). Still, iOS seems to handle app suspension more efficiently than Android (and yes, Android actually suspends background apps as well).

    I’m with you that they could’ve likely bumped all soon-to-be-released iPhone 16 models to 16 GB, but rumors only have them at 8 GB. Makes “sense”, as even the iPad Pro and MacBook Air still only come with 8 GB in their lowest configurations.

    But I don’t buy that releasing the iPhone 15 with only 6 GB of RAM was a malicious attempt at limiting AI features. Seeing how unfinished their AI stuff is even in their latest beta releases, they were/are playing catch-up. It was bad foresight, and there is often talk about how internal teams at Apple are very secretive about projects in development; I wouldn’t be surprised if the team developing the iPhone 15 knew pretty much nothing about the software plans for Apple Intelligence. It’s still a very valid point of criticism, obviously: you can still buy an iPhone 15 to this day (it’s the “latest and greatest” non-Pro iPhone until the iPhone 16 releases in a few weeks), and you won’t get by far the biggest feature of a software update releasing just weeks/months after your purchase. That’s a huge step backwards in terms of software support, as iPhones normally get pretty much all major new software features for at least 3 years, and still get most features of even newer OS releases (recent devices have seen support for major updates for 6+ years; the iPhone XS will get its 7th major iOS release with iOS 18).

    I’m not saying “cut that poor multi-trillion dollar company a break”, I’m just saying that not supporting the iPhone 15 for Apple Intelligence probably isn’t the result of malice, but rather of bad foresight and poor internal communication. Limiting the soon-to-be-released iPhone 16 models to 8 GB, on the other hand, seems very greedy, especially with them trying to run as many of their AI models on-device as possible.
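Regarding the background-location trick mentioned in the first paragraph of this comment (the one Prompt reportedly uses): below is a minimal Swift sketch of how an app might keep itself alive that way. This is not Prompt’s actual code; the class name is made up, and it assumes the “location” UIBackgroundModes entry is present in Info.plist and the user has granted location access.

```swift
import CoreLocation

// Keep a low-power background location session running so iOS doesn't suspend
// the app (and with it, the live SSH connection).
final class ConnectionKeepAlive: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        // We only care about the session staying alive, not about precise fixes,
        // so ask for the coarsest accuracy available.
        manager.desiredAccuracy = kCLLocationAccuracyThreeKilometers
        manager.allowsBackgroundLocationUpdates = true   // requires the location background mode
        manager.pausesLocationUpdatesAutomatically = false
        manager.startUpdatingLocation()
    }

    func stop() {
        manager.stopUpdatingLocation()
    }

    // The location fixes themselves are irrelevant for keep-alive purposes; just discard them.
    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {}
}
```

The obvious trade-off is battery drain and the location indicator staying visible while the connection is open.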


  • I’m not generally arguing it’s not a big deal. I’m actually saying the regular M chips should be upgraded to M “Pro” levels of display support. But beyond two external displays, yes, I’m arguing it’s not a big deal, simply because >99% of users don’t want to use more than two external displays (no matter the resolution). Even if I had 6 old displays lying around, I would hardly use more than two of them for a single computer. And as long as I’m not replacing all 6 displays with 6 new ones, it doesn’t make a difference in terms of e-waste. On the contrary, I’d use way more energy driving 6 displays simultaneously.

    I’m 100% with you that MST should be supported, but not because driving six displays off a single port is something I expect many people to do, but because existing docking solutions often use MST to provide multiple (2) DisplayPort outputs. My workplace has seats with a USB-C docking station connected to two WQHD displays via MST, and they’d all need replacing should we ever switch to MacBooks.

    And sure, they should bring back proper font rendering on lower resolution displays. I personally haven’t found it to be too bad, but better would be … better, obviously. And as it already was a feature many moons ago, it’s kind of a no-brainer.


  • I’m not sure if you’re agreeing or disagreeing with me here. Either way, hardware has a substantially longer turnaround time than software. The iPhone 15 would’ve been in development years before release (I’m assuming they’re developing multiple generations in parallel, which is very likely the case), and keep in mind that its internals are basically identical to the iPhone 14 Pro’s, featuring the same SoC.

    AI and maybe AAA games like Resident Evil aside, 6 GB seems to work very well on iPhones. If I had a Pixel 6/7/8 Pro with 12 GB and an iPhone 12/13/14 Pro (or 15) with 6 GB, I likely wouldn’t notice the difference unless I specifically counted the number of recent applications I could reopen without them reloading. 6 GB keeps plenty of recent apps in memory on iOS.

    But going with 8 GB in the new models, knowing that AI is a thing and that the minimum requirement for their first series of models is already 8 GB, isn’t too reassuring. I’m sure these devices will get 5-8 years of software updates, but new AI features might then be reduced or not present at all on these models.

    When talking about “AI” in this context I’m talking about everything new under the “Apple Intelligence” umbrella, like LLMs and image generators. They’ve done what you’d call “AI” nowadays for many years on their devices, like photo analysis, computational photography, voice isolation, “Siri” recommendations etc.


  • I think they got caught with their pants down when everybody started doing AI and they were like “hey, we have this cool VR headset”. Otherwise they would’ve at least prepared the regular iPhone 15 (6 GB) to be ready for Apple Intelligence. Every Apple Silicon device with 8 GB or more gets Apple Intelligence, so M1 iPads from 2021 get it as well, for example, even though the M1’s NPU is much weaker than some of the NPUs in unsupported devices with less RAM.

    They are launching their AI (or at least everything under the “Apple Intelligence” umbrella) with iOS 18.1, which won’t even be out at the launch of the new iPhones. It’ll be US-only (or at least English-only), several of the features announced at WWDC are still missing/coming later, and it’s unclear how they’ll proceed in the EU.


  • Yup. Right now the iPhone 15 Pro is the only model with 8 GB of RAM, while the regular iPhone 15 has 6 GB. All iPhone 16 models (launching next month) will still only have 8 GB according to rumors, which happens to be the bare minimum required to run Apple Intelligence.

    Giving the new models only 8 GB seems a bit shortsighted and will likely mean that more complex AI models in future iOS versions won’t run on these devices. It could also mean that these devices won’t be able to keep a lot of apps ready in the background if running an AI model in-between.

    16 GB is proper future-proofing on Google’s part (unless they lock new software features behind newer models anyway down the road), and Apple will likely only gradually increase memory on their devices.


  • What you’re describing as “DisplayPort alt mode” is DisplayPort Multi-Stream Transport (MST). Alt mode is the ability to pass native DisplayPort stream(s) over USB-C, which all M chip Macs are capable of. MST is indeed unsupported by the M chip hardware, and it isn’t supported in macOS either way: even Intel Macs don’t support it, despite their hardware being capable of it.

    MST is nice for a dual WQHD setup or something (or dual UHD@60 with DisplayPort 1.4), but try to drive multiple (very) high resolution and refresh rate displays and you’ll be starved for bandwidth very quickly (some rough numbers follow after this comment). Daisy-chaining 6 displays might technically be possible with MST, but each of them would need to be set to a fairly low resolution by today’s standards. Macs that support more than one external display can carry two independent/full DisplayPort 1.4 signals per Thunderbolt port (as per the Thunderbolt 4 spec), so with a proper Thunderbolt hub you can connect two high resolution displays via one port, no problem.

    I agree that even base M chips should support at least 3 simultaneous displays (one internal and two external, or 3 external in clamshell mode). They should also add MST support, for the convenience of connecting to USB-C hubs that use MST to drive two (lower-resolution) monitors, and bring back proper sub-pixel font anti-aliasing on these low-DPI displays (which macOS was perfectly capable of in the past, but they removed it). Just for the convenience of being able to use any random hub you stumble across and have it “just work”, not because it’s necessarily ideal.

    But your comparison is blown way out of proportion. “Max” Macs support the internal display at full resolution and refresh rate (120 Hz), 3 external 6K 60Hz displays and an additional display via HDMI (4K 144 Hz on recent models). Whatever bandwidth is left per display when daisy-chaining 6 displays to a single Thunderbolt port on a Windows machine, it won’t be anywhere near enough to drive all of them at these resolutions.
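To put rough numbers on the bandwidth argument above, here is a quick back-of-the-envelope sketch (Swift, but it’s really just arithmetic). The ~5% blanking overhead and the DP 1.4 payload figure are approximations, exact timings vary per display, and DSC is ignored entirely.

```swift
import Foundation

// Very rough uncompressed bandwidth per display:
// width x height x refresh x bits-per-pixel, plus ~5% for reduced (CVT-R2) blanking.
func approxGbps(width: Double, height: Double, hz: Double, bpp: Double = 24) -> Double {
    (width * height * hz * bpp * 1.05) / 1e9
}

let dp14PayloadGbps = 25.92  // DP 1.4 HBR3: 32.4 Gbit/s raw, ~25.92 Gbit/s after 8b/10b coding

let wqhd60 = approxGbps(width: 2560, height: 1440, hz: 60)  // ~5.6 Gbit/s
let uhd60  = approxGbps(width: 3840, height: 2160, hz: 60)  // ~12.5 Gbit/s

print(2 * wqhd60)           // ~11.1 Gbit/s: dual WQHD@60 fits comfortably
print(2 * uhd60)            // ~25.1 Gbit/s: dual UHD@60 only just fits
print(dp14PayloadGbps / 6)  // ~4.3 Gbit/s per display in a 6-way split,
                            // i.e. roughly 1080p60 territory per display
```

Which is exactly why a six-display MST daisy chain ends up at fairly low resolutions, while two streams per link is the realistic sweet spot.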








  • You can definitely get fairly accurate power draw readings from these chips in macOS, even with Apple’s own debugging tools such as powermetrics (a small sketch follows at the end of this comment). If anything, it’s harder (or at least more confusing) to get accurate readings for AMD chips (TDP != power draw).

    Also, the TDP the manufacturer states in the spec sheet pretty much doesn’t mean anything these days. These chips will be allowed to draw different amounts of power for different durations under different conditions. This is especially true for the AMD parts, as they run in a lot of different laptops with different power and cooling capabilities. But even for Apple’s M chips there are different configurations: a MacBook Air only has passive cooling while the same chip in a MacBook Pro can have active cooling, which will impact maximum allowed (sustained) power draw and with that, performance.

    You also link to CPU Monkey, a website I wouldn’t use for anything but very rough estimates, because their seemingly random collection of benchmarks is likely just taken/stolen from somewhere else (I doubt they benchmarked every single CPU they list themselves), and it’s unclear under what power limits and thermal constraints those benchmarks were run.

    Even with all the data, it’s still hard to make a 100% accurate comparison. For example, the efficiency curves of these CPUs are likely quite a bit different. The M3 might achieve its highest performance/watt at 12 watts, while the Ryzen’s best performance/watt might be at 15 watts (these numbers are just an example). So, do you compare at 12 or 15 watts then?

    And yes, there absolutely can be situations where the AMD CPU draws 50% or even 100% (or more) more power under load, and depending on the configuration of the chip in a specific system, the opposite can be the case as well. This in itself doesn’t tell you much about potential power efficiency though.

    EDIT: Also, comparing the Ryzen 9 part with 12 cores to the smallest M2 doesn’t make any sense. You’d much more likely compare it to the M2 Max which has 12 cores as well (and again, trying to match the TDP in the spec sheet doesn’t make any sense, as especially for AMD, TDP isn’t even close to actual power draw under load - PPT is at least a somewhat better number here).

    I also get that you’re trying to match the process node as closely as possible, and TSMC N4 is “just” an improved variant of TSMC N5P, but it still differs. Also, the M2 was released two years earlier than AMD’s AI 300 series, so you’re ignoring two years of architecture improvements which happen regardless of the process node; just look at the (supposed) performance and efficiency improvements from desktop Zen 4 to Zen 5 on the same node.

    Maybe the new AMD chips are better in many ways even compared to more recent Apple chips, but the comparison you are trying to make is so deeply flawed on so many levels that it’s completely useless and it doesn’t prove anything whatsoever.
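On the point about reading power draw with Apple’s own debugging tools (first paragraph of this comment): here is a sketch of shelling out to powermetrics from Swift. It has to run as root, and the exact flags and output format here are from memory and may differ between macOS versions, so treat it as illustrative rather than definitive.

```swift
import Foundation

// Take one CPU power sample via Apple's bundled powermetrics tool and keep
// only the lines that mention power.
func sampleCpuPower() throws -> String {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/powermetrics")
    // Assumed flags: one sample from the cpu_power sampler over a 1000 ms window.
    process.arguments = ["--samplers", "cpu_power", "-i", "1000", "-n", "1"]

    let pipe = Pipe()
    process.standardOutput = pipe
    try process.run()
    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    process.waitUntilExit()

    // On Apple Silicon the report contains lines such as "CPU Power: … mW".
    return String(decoding: data, as: UTF8.self)
        .split(separator: "\n")
        .filter { $0.contains("Power") }
        .joined(separator: "\n")
}
```

Nothing here is as convenient as a single "TDP" figure, but it gets you actual measured power rather than a spec-sheet number.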