For the record, Nvidia currently has three different variants of the GTX 1650. There’s the original GTX 1650 with 4GB of GDDR5 memory, a newer version with 4GB of faster GDDR6 memory, and the GTX 1650 Super, which also has 4GB of GDDR6 memory. The GTX 1650 Ultra is set to be the fourth (yes, fourth) version of Nvidia’s GTX 1650 family, and according to SZ Galaxy’s official product page it’s touted to have 4GB of GDDR6 memory clocked at 12Gbps, just like its GDDR6 siblings. The GDDR5 model, in case you somehow forgot, is only clocked at 8Gbps (there’s a quick back-of-the-envelope bandwidth sum at the end of this piece if you want to see what that difference works out to).

However, the product specifications for this new type of GTX 1650 graphics card also list its GPU core as the TU106-125, which is a cut-down version of the TU106-400 GPU inside the RTX 2070. The GTX 1650 Super, meanwhile, uses a GPU known as the TU116-250, while the two regular GTX 1650s both use a TU117-300 GPU.

The really weird thing about the GTX 1650 Ultra, though, is that this new GPU still only has 896 CUDA cores, just like the regular GTX 1650, according to SZ Galaxy’s specs, as well as a base clock speed of 1410MHz and a boost clock speed of 1590MHz, which is once again identical to the vanilla GTX 1650. The RTX 2070’s GPU, on the other hand, has 2304 CUDA cores, which (checks notes) is a heck of a lot more than what’s purported to be inside the GTX 1650 Ultra.

It’s a mind-boggling set of contradictory facts if ever I saw one, but before you dismiss it all as internet nonsense, the most likely explanation is that these new GTX 1650 Ultra GPUs are actually rejects from RTX 2070 production.

When a hardware manufacturer fabricates its silicon wafers, a certain percentage of the chips cut from each one aren’t fit for their intended final product. They either fail internal tests or simply aren’t up to snuff, so they can’t be sold as the thing they were meant to be. This is what’s known as “binning” in the industry, and it happens with CPUs, GPUs and RAM alike.

When products do get binned, it’s pretty common for manufacturers to disable certain features of these bits of silicon in order to sell them on as lesser, cheaper variants so they don’t lose money - and that’s probably what’s happened here with the GTX 1650 Ultra. (There’s a toy sketch of the idea at the end of this piece, too.)

Alas, this means the GTX 1650 Ultra probably isn’t secretly an RTX 2070 in disguise, as Nvidia will almost certainly have disabled its ray tracing and Tensor cores to bring it down to the same level as their regular GTX 1650 cards - hence the dramatic reduction in CUDA cores.

There is one notable difference, though: its TDP has risen from 75W up to 90W. That’s still 10W below the GTX 1650 Super, but it does suggest the Ultra is perhaps a touch more powerful than the vanilla GDDR6 version of the GTX 1650, simply by virtue of being able to draw more power.

As for the rest of its specs, though, there really doesn’t look to be anything even vaguely “Ultra” about it to justify the name - which is perhaps why there’s no special “Ultra” logo on the accompanying product image of the card’s box like there is with the GTX 1650 Super. Instead, it seems likely that Nvidia will just pass this off as a regular GTX 1650, despite the fact it has a completely different GPU sitting at its heart. We won’t know for sure until Nvidia formally announce it.

The real question, though, is whether the Ultra has the chops to finally topple the 4GB version of AMD’s stupendous RX 5500 XT, which is still the best graphics card for those on a tight budget.
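As promised, here’s that memory bandwidth sum. It’s a minimal sketch: the 12Gbps and 8Gbps figures come from the spec sheets above, while the 128-bit memory bus is the standard GTX 1650 bus width rather than anything Galaxy’s listing confirms for the Ultra.

```python
BUS_WIDTH_BITS = 128  # standard GTX 1650 bus width (assumed for the Ultra)

def bandwidth_gb_s(data_rate_gbps):
    """Peak bandwidth = per-pin data rate x bus width, divided by 8 bits per byte."""
    return data_rate_gbps * BUS_WIDTH_BITS / 8

print(f"GDDR5 @ 8Gbps:  {bandwidth_gb_s(8):.0f} GB/s")   # 128 GB/s
print(f"GDDR6 @ 12Gbps: {bandwidth_gb_s(12):.0f} GB/s")  # 192 GB/s
```

In other words, the jump from 8Gbps GDDR5 to 12Gbps GDDR6 is worth about 50% more peak memory bandwidth on the same bus, which is why the GDDR6 cards pull ahead despite identical core counts and clocks.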
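And here’s that toy binning sketch. To be clear, this is purely illustrative, not Nvidia’s actual process: I’m assuming Turing’s usual 64 CUDA cores per shader block, which would make the full TU106 a 36-block chip (2304 cores) and the Ultra’s cut-down version a 14-block one (896 cores), and the defect rate is entirely made up.

```python
# Toy illustration of die binning - not Nvidia's actual process.
import random

BLOCKS_PER_DIE = 36   # full TU106: 2304 cores / 64 cores per block
FLAGSHIP_MIN = 36     # RTX 2070 needs every block working
CUT_DOWN_MIN = 14     # 896 cores / 64 per block = 14 working blocks

def test_die(defect_rate=0.03):
    """Return how many shader blocks on a die pass testing."""
    return sum(random.random() > defect_rate for _ in range(BLOCKS_PER_DIE))

def bin_die(working_blocks):
    """Sort a tested die into a product bin."""
    if working_blocks >= FLAGSHIP_MIN:
        return "full TU106-400 (RTX 2070)"
    if working_blocks >= CUT_DOWN_MIN:
        # Surplus working blocks get fused off so every card
        # matches the lower model's spec exactly.
        return "cut-down TU106-125 (GTX 1650 Ultra?)"
    return "scrap"

dies = [bin_die(test_die()) for _ in range(10_000)]
for name in sorted(set(dies)):
    print(f"{name}: {dies.count(name) / len(dies):.1%}")
```

Run it and only around a third of the simulated dies come out fully intact, which is the whole point: rather than binning the imperfect majority, you fuse them down to a spec they can all hit and sell them as something cheaper.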
If I ever get my hands on a GTX 1650 Ultra sample, you can be sure I’ll let you know.