Are AMD processors better than the Intel ones for PixInsight? · Pleiades Astrophoto PixInsight · Daniel Arenas

D_79 1.43
Brian Puhl:
How many cores you have really is the answer. Even my biggest WBPP stacks have never exceeded 2 hours, and that was when I unnecessarily ran LN. IMX571 data.

i9-12900K, 32GB RAM, RTX 3090, all SSDs, but PI runs on my slower SATA drives. The new Ryzens take the cake on stacking times, but Intel still hangs in there. I just ran a full-frame 2-panel mosaic; StarX and BlurX each took around 1 to 1.5 minutes, and the longest process was LHE at roughly 2 minutes. My Embryo stack was something like 500 frames and it took somewhere around an hour.

Wow, that's great! I use a laptop, as I said, and that's why I want to buy a desktop. I have an 11th-generation Intel i7 at 2.8 GHz, 16 GB RAM, and an integrated Intel GPU, and it takes 2 or 3 hours to stack 60 lights in WBPP.

swalkenshaw 0.00
I run a Ryzen 5 5600G with 64GB of RAM and a PCIe scratch drive. I'm happy with the performance, including RC-Tools.

sfboone61 0.00
I run a Ryzen 9 5950X with 64GB of RAM and an NVIDIA RTX 3060 and am very happy with the performance. The 3060 makes a huge difference with the AI-based software such as RC Astro's add-ons.

Scotty_Maxwell 0.00
Jason Coon:
If I were building a machine today, I’d use the 7950x CPU, NVidia 4070 Ti GPU, Gen 4 NVMe system, dual Gen 4 NVMe (RAID 0) data, and 128GB RAM.

Jason, I agree with almost 100% of what you recommended; as a matter of fact, here is the parts list of the PC I built in March this year:

Screenshot 2023-12-10 162651.png

The one discrepancy I have is the 128GB of RAM. I originally attempted this and could not get the system to POST, and after much discussion with G.Skill and ASUS support I was summarily told the system in fact would not boot or even POST with 128GB of RAM. At the time there were no 2 x 64GB kits, so I had to go with 4 x 32GB. I have built many PCs using all 4 slots before and never had a problem, but this configuration would not work. That's not to say it doesn't now, but I would sure double-check first.

Also, as an n=1, my results have been spectacular with PixInsight. When working through Adam Block's Fundamentals and following along step by step, a WBPP run that took him 52 minutes and 59 seconds took me just short of 2 minutes. I didn't even have to pause the video to wait on PI.

airscottdenning 1.43
Jeff Reitzel:
Never hurts to get the best you can afford. I see improvement with many PI processes with CUDA enabled. Image alignment comes to mind as a big one. Star removal, deconvolution, and noise reduction as well. My laptop is not new: Ryzen 5950, NVIDIA RTX 3050 GPU, 32GB RAM, and it flies through anything PixInsight throws at it.
CS,
Jeff

Lots of fast CPU cores (AMD or Intel), huge amounts of fast RAM, and the fastest SSDs you can afford will make WBPP run very fast.  

Nearly all my total wall-clock time in PixInsight is consumed by preprocessing: calibration, registration, local normalization, and integration (stacking). None of these processes are sped up AT ALL by the GPU, but they are all very sensitive to the number and speed of cores, the amount of RAM, and the speed of the SSD.

GPU acceleration in PixInsight is ONLY used by the following third-party plugins: StarNet2, BlurXTerminator, StarXTerminator, and NoiseXTerminator. The GPU is completely idle and unused for ALL other computations in PixInsight. Also, only CUDA-enabled (NVIDIA) GPUs are supported.

For the past 8 years, GPU acceleration has been hinted by the PI developers as "coming soon."  But there are "many other higher priorities." 

You can get an extremely fast PI workstation (7950X with 16 cores / 32 threads, 128 GB RAM, and fast SSDs) for about $1600. You can spend more to add a GPU, but beware that it will only be used by those four 3rd-party plugins, NOT AT ALL during time-consuming preprocessing.
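If you do add an NVIDIA card, a quick sanity check is to confirm that a CUDA device is actually visible to TensorFlow, which is what those plugins are built on. This is only a rough sketch run from a separate Python install with the tensorflow package (it checks your driver/CUDA stack, not PixInsight's own bundled copy), so treat it as an indicator only:

```python
# Rough check: does TensorFlow see a CUDA-capable GPU on this machine?
# Assumes a Python environment with the "tensorflow" package installed.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("CUDA GPU(s) visible to TensorFlow:", [gpu.name for gpu in gpus])
else:
    print("No GPU visible - StarNet2/BlurX/StarX/NoiseX would fall back to the CPU.")
```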

dhavalbbhatt 2.62
I have been using PI for close to 10 years now and have always built my own computers for PI processing. Over the years, the developers of PI have consistently stressed that hardware for PI should have the following characteristics (I don't believe this is in any particular order of importance, but in my experience this is how one should prioritize the hardware):

1) CPU - In particular, the more cores your system has, the better PI will perform. I've always built my systems with AMD CPUs, given that they have always had more cores than Intel at a particular price point (and then Threadripper changed all that). My current system has a 24-core AMD Threadripper. It is wonderfully fast for me. I have a friend who has a 36-core AMD Threadripper; his is obviously faster than mine. Both he and I use a QHY600 for imaging from remote sites, typically putting 20+ hours on most targets. That means hundreds of subs. My system will crunch through those subs with ease.
2) RAM - Again, higher is better. I have 128GB, but 64GB should be fine as well. RAM is cheap these days, so why not get more?
3) SSDs - Faster read/write helps tremendously. Again, these are cheap and readily available. M.2 NVMe is faster than SATA, so be careful about which SSDs you buy/install. Between RAM and SSDs, I've seen that SSDs matter more, but you want more RAM in the system just to future-proof yourself. Besides, most new systems come with some form of SSD. (There's a quick way to take stock of items 1-3 in the sketch after this list.)
4) Operating System - Remember, PI develops their code on Linux (Kubuntu). They also believe that a Linux distro helps PI run at its fastest. Windows/macOS are about 5% or so off. That is not a lot, so this is not that big of a deal.
5) GPU - This is fairly new. Remember, PI does not use the GPU for its own processes. However, there are new scripts/functions (especially the RC tools) that leverage the power of a GPU. If you have the above sorted out, this does not impact your processing by a whole lot. In my own testing, before turning on GPU processing BlurXT would take 20-40s more. Without GPU support the CPU does the heavy lifting. I believe the only functions that benefit are those created by Russell Croman (RC tools - BlurXT, NoiseXT and StarXT) and StarNet (I don't use that). If you choose to install a Linux distro, you have to be very particular about which distro and release you install, because CUDA (NVIDIA) has very specific requirements. Windows is a bit friendlier for turning on CUDA support. Apple (the machines with the new Apple Silicon chips) has GPU processing turned on out of the box.
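As a quick way to see where a machine you already have stands on items 1-3 (cores, RAM, free disk), here is a small sketch; it assumes Python with the third-party psutil package and is just an inventory, not a benchmark:

```python
# Rough inventory of the three things that matter most for WBPP: cores, RAM, disk.
# Assumes the third-party "psutil" package (pip install psutil).
import shutil
import psutil

print("Physical cores:", psutil.cpu_count(logical=False))
print("Logical cores: ", psutil.cpu_count(logical=True))
print("Total RAM:      %.1f GB" % (psutil.virtual_memory().total / 1e9))

# Free space on the drive holding the PixInsight swap/working folders
# ("/" is a placeholder - point it at your actual swap location).
total, used, free = shutil.disk_usage("/")
print("Free disk:      %.1f GB" % (free / 1e9))
```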

Details of my current system:
ASUS ROG motherboard supporting AMD Threadripper 24-core processor
RAM - 128GB
SSD - 2TB M.2 NVMe
OS - Ubuntu 22.04 LTS with CUDA 12.3 installed
GPU - Nvidia 4070Ti

If you have any questions about my system, feel free to DM me.

Thanks and good luck!

Dhaval

ShortLobster 0.00
The Pixinsight Benchmark page has performance data for many system builds. It's fairly easy to deduce what factors are significant and what aren't (and they've all been mentioned above: fast processor with as many cores as possible, abundant fast memory, fast drives). Linux is significantly faster than Windows. 

https://pixinsight.com/benchmark/

nickmccollum 0.00
I looked at the benchmarks and wasn't sure whether to go with the AMD 7950X or the Intel 13900K, but I had the opportunity to test and ended up running the same WBPP job on both, and the 13900K was slightly faster. It was also slightly cheaper. Either is probably fine; at this point you're looking at a few percent difference.

herman 0.00
Nick McCollum:
I looked at the benchmarks and wasn't sure whether to go with the AMD 7950X or the Intel 13900K, but I had the opportunity to test and ended up running the same WBPP job on both, and the 13900K was slightly faster. It was also slightly cheaper. Either is probably fine; at this point you're looking at a few percent difference.

This is interesting, and actually what I would expect if both were appropriately cooled (and all other variables like RAM, SSD, swap, and OS were equal), but I don't see enough posts about the 13900K (or 14900K)... it seems the 7950X is the more popular of the two, but I've not seen enough evidence as to why, other than popularity. Were the two systems you tested "equally" cooled to ensure the most out of each processor?

I'm finally going to build a new system after 10 years (i7-4790 and a GTX-950)!  I'll go with either the 7950X or 14900K once I can find enough evidence for one over the other specifically for PixInsight.  I like that the 7950 is potentially easier to cool and I like that the 14900K uses less power when not maxed out on a job like WBPP.

NelzAstro 2.15
Has anyone mentioned power efficiency here?

Intel may have what looks to be superior performance with the current generation (13900K/14900K), but there is a cost, and it's a big one: power consumption, and therefore heat dissipation.

Trying to keep the current-gen Intel silicon running at its max clocks requires some beefy cooling, and even a 360mm AIO water cooler can end up thermal throttling in multi-core-heavy workloads (i.e. PixInsight), nullifying that extra cash you spent on the 'Top Chip'.

In my experience AMD is the better choice for productivity workloads, thanks to their thermal efficiency, power efficiency and core count, along with sensible cooling requirements (due to the greater efficiency).

I'm gonna pop my pith helmet on for this next comment.....

Intel for gaming, AMD for when you want to get serious.

I'm running a gaming rig with a 13900K and my editing rig with a 5950X; both are on 64GB RAM (DDR5-6400 on the Intel and DDR4-3600 on the AMD) and both run RTX 3090s.

The 5950X is running with a PBO overclock at 4.6 GHz all-core and 5.1 GHz single-thread. (Easily achievable on any half-decent X570 or B550 motherboard.)

13900K is stock with motherboard defaults.

For PixInsight workflows both run around the same processing time for WBPP (+/- 45s for 200 frames) and identical for RCAstro tools (CUDA).

However, when running WBPP the Intel system pulls approx 300 W from the wall while the AMD system pulls approx 200 W. If you're in the UK with our utterly savage energy prices, this makes a big difference.
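To put a rough number on that 100 W gap, here's a back-of-the-envelope sketch; the weekly processing hours and the price per kWh are assumptions, so plug in your own figures:

```python
# Back-of-the-envelope cost of the ~100 W difference under WBPP-class load.
intel_watts, amd_watts = 300, 200   # approx. wall draw during WBPP (from above)
hours_per_week = 6                  # assumption: hours of heavy processing per week
price_per_kwh = 0.28                # assumption: UK tariff in GBP per kWh

extra_kwh_per_year = (intel_watts - amd_watts) / 1000 * hours_per_week * 52
print(f"Extra energy: ~{extra_kwh_per_year:.0f} kWh/year")
print(f"Extra cost:   ~GBP {extra_kwh_per_year * price_per_kwh:.2f}/year")
```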

WeberPh 6.62
NelzAstro:
Has anyone mentioned power efficiency here? [...] In my experience AMD is the better choice for productivity workloads, thanks to their thermal efficiency, power efficiency and core count, along with sensible cooling requirements (due to the greater efficiency). [...] However, when running WBPP the Intel system pulls approx 300 W from the wall while the AMD system pulls approx 200 W. If you're in the UK with our utterly savage energy prices, this makes a big difference.


I'd just briefly like to second what you're writing here. In terms of power efficiency AMD is currently a long way ahead of Intel (although it looks as if it's changing with Zen 4 right now).
Gamers Nexus started to compile what they call "Megacharts" listing various metrics for different PC hardware types, one of which is power efficiency (essentially work done per Watt consumed). You can look at it here:
https://gamersnexus.net/megacharts/cpu-power
Make sure to scroll down a bit until you arrive at the efficiency section.
It's quite telling, really, and the reason why I do have my 5950x now.

WeberPh 6.62
And by the way, it is in fact possible in principle to use AMD GPUs to massively accelerate the RC tools and other TensorFlow-based processes; see my post here. I achieved a ~26 times speed increase on my RX 6950 XT.
I'm not gonna lie though, reproducing what I did is not easily achieved, especially when running on Windows. I'd love for TensorFlow to also provide precompiled versions with ROCm support, I just don't get why they're not doing this.
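If you want to gauge the CPU-to-GPU gap on your own machine (whatever backend you end up with), a rough TensorFlow timing sketch along these lines works; the image size, kernel and iteration count are arbitrary assumptions and the real plugins run much larger networks, so treat the numbers as indicative only:

```python
# Rough CPU-vs-GPU timing for a TensorFlow workload loosely resembling what the
# star-removal / sharpening plugins do (stacks of 2-D convolutions).
import time
import tensorflow as tf

def bench(device: str, iters: int = 20) -> float:
    with tf.device(device):
        image = tf.random.normal([1, 1024, 1024, 3])
        kernel = tf.random.normal([5, 5, 3, 3])
        _ = tf.nn.conv2d(image, kernel, strides=1, padding="SAME")  # warm-up
        start = time.perf_counter()
        for _ in range(iters):
            out = tf.nn.conv2d(image, kernel, strides=1, padding="SAME")
        _ = out.numpy()  # force execution to finish before stopping the clock
        return (time.perf_counter() - start) / iters

print(f"CPU: {bench('/CPU:0') * 1000:.1f} ms per pass")
if tf.config.list_physical_devices("GPU"):
    print(f"GPU: {bench('/GPU:0') * 1000:.1f} ms per pass")
```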

NelzAstro 2.15
Philipp Weber:
Gamers Nexus started to compile what they call "Megacharts" listing various metrics for different PC hardware types, one of which is power efficiency (essentially work done per Watt consumed). You can look at it here:
https://gamersnexus.net/megacharts/cpu-power


@Philipp Weber Thanks for posting, I was trying to find that link.

@Daniel Arenas

Hope all this helps 😀

herman 0.00
I too like the power efficiency of the 7950X over the 14900K but I am a little spooked on the multiple reports I've heard (including the one in this thread) about using all four memory slots with AM5 boards (I am planning for 128GB so I need all four slots).  Long ago (20 years ago?) I gave up on building AMD-based systems because of frequent memory-caused instability in the boards/chipsets/bios at the time.  I am a little discouraged to see this again as I reconsider AMD after all these years.  I was targeting the ASUS ProArt X670E, which is arguably quite similar to the ASUS ROG Strix variant cited earlier in this thread but not sure now if I want to risk the hassle of using immature DDR5 4-slot support.  Anyone have recent experience with using all 4 slots with AM5 boards?

D_79 1.43
NelzAstro:
Has anyone mentioned power efficiency here? [...] In my experience AMD is the better choice for productivity workloads, thanks to their thermal efficiency, power efficiency and core count, along with sensible cooling requirements (due to the greater efficiency). [...] However, when running WBPP the Intel system pulls approx 300 W from the wall while the AMD system pulls approx 200 W. If you're in the UK with our utterly savage energy prices, this makes a big difference.

That's very, very interesting.
I haven't thought about power consumption (and efficiency). Many thanks!
NelzAstro:
@Philipp Weber Thanks for posting, I was trying to find that link.

@Daniel Arenas

Hope all this helps 😀

Yes, this information helps!
But just when I think the advantages of AMD are clear, some comments argue the opposite and say that Intel is better, and vice versa. At the moment it seems, from what I am understanding in this thread, that AMD has better management of cores and power consumption.

Thank You!

D_79 1.43
Jeff Herman:
I too like the power efficiency of the 7950X over the 14900K but I am a little spooked on the multiple reports I've heard (including the one in this thread) about using all four memory slots with AM5 boards (I am planning for 128GB so I need all four slots).  Long ago (20 years ago?) I gave up on building AMD-based systems because of frequent memory-caused instability in the boards/chipsets/bios at the time.  I am a little discouraged to see this again as I reconsider AMD after all these years.  I was targeting the ASUS ProArt X670E, which is arguably quite similar to the ASUS ROG Strix variant cited earlier in this thread but not sure now if I want to risk the hassle of using immature DDR5 4-slot support.  Anyone have recent experience with using all 4 slots with AM5 boards?

Sorry @Jeff Herman

I think I missed this information. What are the problems with using all four memory slots on AM5 boards?
Because one thing I want is to buy components that will keep the desktop useful for at least 4 or 5 years, so for AMD I was looking at AM5 motherboards and for Intel the LGA1700 ones. For RAM I was thinking of buying 64 GB in two slots in order to upgrade the amount in the future.

Clear Skies!

herman 0.00
Daniel Arenas:
I think I missed this information. What are the problems with using all four memory slots on AM5 boards?
Because one thing I want is to buy components that will keep the desktop useful for at least 4 or 5 years, so for AMD I was looking at AM5 motherboards and for Intel the LGA1700 ones. For RAM I was thinking of buying 64 GB in two slots in order to upgrade the amount in the future.

Clear Skies!

See @Scotty Maxwell's post above. I have found many other similar reports on various system-build forums, YouTube, etc. I think now it's probably more about ensuring you're getting the full speed out of the RAM and, to be fair, it's probably more of a DDR5 issue than specifically an AM5 issue. Basically, when going to 4 slots instead of 2 you are limited in how far you can overclock the RAM past, or even reliably reach, the stock 5200 MHz (for the 7950X) or 5600 MHz (for the 13900K/14900K). Some boards will train to well below the stock speed (3600 MHz), and then you have to go in and tweak just to bring it back up to stock. My read of all the comments on this is that Intel's memory management is a little more reliable and requires less tweaking (none, maybe?) to operate at 5600 MHz when using all 4 slots. Anyway, if one uses 2 slots instead of 4, everything is much easier to deal with. I might just live with 96GB (2x48GB) instead of 128GB to ensure reliability.
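For a sense of what's at stake, here is a quick back-of-the-envelope on theoretical peak bandwidth for dual-channel DDR5 at the speeds mentioned above (64-bit channels, two channels; real-world throughput is lower, but the ratio is what matters):

```python
# Theoretical peak bandwidth for dual-channel DDR5 at a given transfer rate.
def peak_gb_per_s(mt_per_s: int, channels: int = 2, bytes_per_transfer: int = 8) -> float:
    return mt_per_s * bytes_per_transfer * channels / 1000  # GB/s

for speed in (5600, 5200, 3600):
    print(f"DDR5-{speed}: ~{peak_gb_per_s(speed):.1f} GB/s peak")
# Training down from 5600 to 3600 MT/s costs roughly a third of peak bandwidth.
```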

WeberPh 6.62
Jeff Herman:
[...] Basically, when going to 4 slots instead of 2 you are limited in how far you can overclock the RAM past, or even reliably reach, the stock 5200 MHz (for the 7950X) or 5600 MHz (for the 13900K/14900K). Some boards will train to well below the stock speed (3600 MHz), and then you have to go in and tweak just to bring it back up to stock. [...] I might just live with 96GB (2x48GB) instead of 128GB to ensure reliability.

Can you maybe state what impact a loss of memory speed would have? I always thought that at least for gaming it's not really that relevant if you run 800 MT/s faster or slower.

DarkStar 18.84
Hello all,

here are my two cents, since I went through this painful decision process for quite a while and can report first hand.

1. Memory
It is not a good idea to go beyond 96 GB of RAM (2x48GB) at the moment. Stay with 2 modules only. When using 4 modules, the bandwidth collapses dramatically; the loss eats up more than the extra capacity gains you. AMD even tends to get unstable.
Even when stacking more than 100 full-frame images, I do not even use half of the physical memory (64GB).

2. Cooling
I am using a 13900KS at the moment with a 4070 Ti. I tried to craft a high-performance cooling setup with the help of the company Alphacool; they are specialists in high-end liquid cooling. It turned out after a couple of months (even with specially crafted CPU water-cooling heads) that the CPU contact surface is simply too small to transfer the heat to the water. I have added temp sensors in the water circuit before and after the CPU: 37 °C in and 42 °C out. As a radiator I am using an external cooling tower ("Eiswand"), which can easily dissipate more than 300 W of heat.
Unfortunately the 13900KS is ALWAYS throttling during stacking, while the water temperature rises only marginally. In idle mode the in/out temps are even identical:
image.png

3. Disks
I am using 3 Samsung 990 Pros, plus a RAM disk, and I keep the catalogs on a dedicated separate SATA SSD. Actually, PI is quite inefficient in its disk access: the disk queues are always close to zero (queue length is the metric to watch in Performance Monitor) and the CPU is not fully used. It is evident that there are more bottlenecks than just disk and CPU. (One way to watch this during a run is the sampling sketch after this list.)

4. CUDA
With the 4070 Ti I do a full-frame BXT in approx 30 s and NXT in under 10 s. So CUDA definitely brings a lot of joy, though it is a pain to configure the first time.

5. Architecture
Windows and PI currently have problems distinguishing between Intel's efficiency and performance cores, so it can happen that high-load tasks are falsely assigned to the wrong cores. Here AMD is much more straightforward. But it can be expected that this problem will be addressed soon.

6. Power
Though Intel has something in common with an electric heater (and could be used for that purpose), it is astonishingly low in power consumption in idle mode. My system runs at 40-60 W at idle. In daily use I restrict it to 45 W via the Intel Extreme Tuning Utility, which limits the max power. While stacking at full power it easily burns 280 W as heat, which has to be carried away by the cooling.

image.png

7. Airflow / chassis fans
Of course, all components get hot. Multiple fans are required to keep the inside of the case well cooled.
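On the disks point (3. above): one rough way to watch what the drives are actually doing during a WBPP run is to sample the I/O counters once per second. This sketch uses the third-party psutil package and reports throughput rather than the Performance Monitor queue-length counter, so it is only an approximation:

```python
# Sample system-wide disk throughput once per second while WBPP runs.
# Uses the third-party "psutil" package; reports MB/s, not the Windows
# "Avg. Disk Queue Length" counter mentioned above - just a rough proxy.
import time
import psutil

prev = psutil.disk_io_counters()
while True:
    time.sleep(1)
    cur = psutil.disk_io_counters()
    read_mb = (cur.read_bytes - prev.read_bytes) / 1e6
    write_mb = (cur.write_bytes - prev.write_bytes) / 1e6
    print(f"read {read_mb:7.1f} MB/s | write {write_mb:7.1f} MB/s")
    prev = cur
```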


I hope I could give you some more aspects to consider.

CS
Rüdiger

herman 0.00
This is great information. Thank you for sharing. Would you choose the 7950X for its lower power requirements if you had to do it all over? Or maybe even the 7950X3D, since it's apparently even more efficient with only a slight performance penalty.

It's a shame the 13900K can't be cooled sufficiently during stacking, because I do like that it's more efficient the rest of the time. I think the 14900K is slightly more efficient overall, but I'm not sure the difference is enough.

DarkStar 18.84
Jeff Herman:
This is great information. Thank you for sharing. Would you choose the 7950X for its lower power requirements if you had to do it all over? Or maybe even the 7950X3D, since it's apparently even more efficient with only a slight performance penalty.

It's a shame the 13900K can't be cooled sufficiently during stacking, because I do like that it's more efficient the rest of the time. I think the 14900K is slightly more efficient overall, but I'm not sure the difference is enough.

Hi Jeff,

I have just read your post on CN

Both AMD and Intel have issues with the heat spreaders on their CPUs. Both are curved: AMD concave, Intel convex, but both have huge production variations. That is the reason why Alphacool provides curved water-cooling heads for each brand. I also had issues with doubly curved coolers and had to replace one: it had an elevation in the center instead of a smoothly curved surface. The consequence: no good contact in the middle, and overheating. And if you have bad luck you get a poor CPU as well. Please see the images below; you can see the gap between the center of the cooler and the edge, with the light shining through, and as a consequence no perfect contact. The imprint is always good evidence. Some hard-core guys also remove the heat spreader from the CPU, but this is very risky.

Xeons and Threadrippers do not suffer from this at all, since they are much bigger and provide a large planar contact surface. They are easy to cool.

To be honest, the performance of AMD and Intel is on roughly the same level. The difference is only a few percent, depending on the workload, and may amount to only a few seconds in WBPP, if that. In my opinion it is more worthwhile to spend the money on good RAM (low latency and high stability) and a good board. Usually the Intel memory controllers are a bit more stable; AMD is very picky when it comes to RAM.

Therefore I would tend slightly towards Intel, but this is really more a matter of personal taste. Serious cooling is a must for both.
I would recommend staying with 2 modules of DDR5, e.g. 2x 48GB, but get high-speed ones. This brings an advantage everywhere, since faster memory speeds up everything.

AMD and Intel are both working on getting more efficient, because the last generations were ridiculous designs: brute-forcing performance just to gain a tiny marketing advantage. I think they have become aware of that.

I have also run countless benchmarks with PI, since the machine is optimized in every component to boost PI, but I noticed that PI needs a dramatic hardware change to show a significant and repeatable (!) performance advantage. Neither overclocking, nor different BIOS versions or settings, nor different memory timings produced performance advantages that really pay off in WBPP, but they did impact stability. In general, PI does not seem to use hardware efficiently, and therefore you need brute force to speed it up.

My very personal conclusion:

1. Get quality components
2. Operate them in their specifications
3. Take care of proper cooling for ALL components. DDR5 gets very hot.
4. CUDA is the biggest fun and a good investment. The pain of getting it running is worth the effort.
5. Prefer workstation boards if you are not a gamer
6. If using a WS board with server chipset you can use x TB of RAM without any impact

Here the PI Benchmark of my system: https://pixinsight.com/benchmark/benchmark-report.php?sn=P57802RY83ISTCTK3W784IEOTUNO0618

If you have more questions you may also PM me.

CS
Rüdiger


Core1.jpg

Abdruck.jpg

dk94041 0.00
Jeff Reitzel:
I run Pixinsight on a laptop as well without issue. I think they are stressing the multi thread capabilities of the AMD Threadripper processor. Intel has no equivalent to that processor and it is extremely expensive. Intel equivalents to the Ryzen 5950 or 7950 line will be fine. The best boost to Pixinsight performance comes from having a good Nvidia GPU so you can enable CUDA parallel processing. Plenty of help videos to walk you through how to do that. It can't be done with anything but Nvidia GPUs as far as I know. 
CS,
Jeff

Ditto this comment, you want the Nvidia GPU. I bought an HP gaming laptop for $800 delivered, and the performance is incredible.  BlurXterminator used to take 10-12 minutes on my Intel based MacBook Pro, now a little over 1 min (to apply to an APS-C image).  WBPP also much faster.  It's very difficult to get the best settings even with live preview mode & small preview areas when the processes take that long to update.

Also, PixInsight publishes the benchmark data from users, so you can get a good estimate of your exact performance by checking for the system you plan to buy here: http://pixinsight.com/benchmark/

T3kko 0.00
Jeff Herman:
[...] Basically, when going to 4 slots instead of 2 you are limited in how far you can overclock the RAM past, or even reliably reach, the stock 5200 MHz (for the 7950X) or 5600 MHz (for the 13900K/14900K). Some boards will train to well below the stock speed (3600 MHz), and then you have to go in and tweak just to bring it back up to stock. [...] I might just live with 96GB (2x48GB) instead of 128GB to ensure reliability.

Recent BIOS updates have started solving a lot of these issues. ASUS now lists two or three 128GB kits as compatible at 5600 MHz, and since the most recent update they also list a 192GB 5200 MHz kit as compatible.

D_79 1.43
Ruediger:
[...] To be honest, the performance of AMD and Intel is on roughly the same level. The difference is only a few percent, depending on the workload, and may amount to only a few seconds in WBPP, if that. [...] If you have more questions you may also PM me.

Thank you so much. Your contribution is very much appreciated.
And I find what you have explained very interesting; it helps me a lot.