Wednesday, 26 January 2011

Super high precision wxDateTime

I've been working slowly on the promised program that would show some test patterns for your beloved monitor.
Recently I got stuck on the response time "patterns" for two reasons:
1. wxWidgets, which I use as the platform, doesn't exactly support fast rendering. The only well-supported method is GDI, and OpenGL has issues with text.
2. Even after deciding what to use, I still had the serious issue of wxWidgets' time functions not having the required resolution. The best they manage - with wxDateTime::UNow() - is around 15 ms, which only suffices for about 67 FPS, and only if those frames happen to line up with the screen's refresh intervals (see the sketch right after this list).
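If you want to check that 15 ms figure on your own machine, something along the lines of the sketch below should do: it spins on wxDateTime::UNow() and reports the smallest step it ever takes. (The function name is just mine for illustration.)

//Rough sketch: estimate the effective resolution of wxDateTime::UNow()
#include <cstdio>
#include "wx/datetime.h"

void measureUNowStep()
{
    long smallest = 0;
    wxDateTime prev = wxDateTime::UNow();
    for (int i = 0; i < 50; ++i)
    {
        wxDateTime cur;
        while (((cur = wxDateTime::UNow()) - prev).GetMilliseconds() == 0)
            ; //spin until the reported time actually changes
        long step = (long)((cur - prev).GetMilliseconds().GetValue());
        if (smallest == 0 || step < smallest)
            smallest = step;
        prev = cur;
    }
    printf("Smallest UNow() step: %ld ms\n", smallest); //around 15-16 ms on my machine
}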

It doesn't help if I can render 250000 FPS when the timestamps on those frames all read the same, so I went looking for a better / more precise alternative.

It turns out this isn't in high demand, or at least there aren't many solutions around.
So I just wrote my own.
I kept the wxDateTime class, which is already precise to the millisecond - in its data, if not in its implementation. That's a bit unfortunate for me, but still precise enough, even though my solution manages quite a bit more: on my particular computer, 1/3339736 of a second, which is better than one-microsecond precision.
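As for where that odd 1/3339736 figure comes from: it's simply the frequency Windows reports for its performance counter. A minimal, Windows-only sketch to print yours (the function name is again just for illustration):

//Print the performance counter frequency, i.e. the best resolution available
#include <cstdio>
#include "windows.h"

void printCounterResolution()
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);
    printf("Performance counter: %lld ticks per second (1 tick = %.3f us)\n",
           (long long)freq.QuadPart, 1e6 / (double)freq.QuadPart);
}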
Also, my solution is platform specific (Windows), since I don't yet have any relevant Linux development VMs. If anyone cares to add cross-platform code, my inbox is open for the required changes :)

I give you the super high precision timer, courtesy of the WTFPL license. Enjoy.

supertimer.h
#ifndef __supertimer_h
#define __supertimer_h

#include "windows.h"
#include "wx/datetime.h"

void initSuperTimer();    //call once before the first getDateTime() call
wxDateTime getDateTime(); //current time, accurate to the millisecond

#endif


supertimer.cpp
#include "supertimer.h"

//Globals shared between initSuperTimer() and getDateTime()
LARGE_INTEGER offsetCounter, perfFreq; //performance counter value at the reference time, and its frequency
wxDateTime refTime; //system time corresponding to offsetCounter

void initSuperTimer()
{
    //Initializes global variables for use in getDateTime() function
    QueryPerformanceFrequency(&perfFreq);
    wxDateTime a = wxDateTime::UNow();
    while (((refTime = wxDateTime::UNow()) - a).GetMilliseconds() == 0)
        ; //This loop really hopes that UNow() has a decent resolution, otherwise it will take forever :(
    QueryPerformanceCounter(&offsetCounter);
}

wxDateTime getDateTime()
{
    //Gets system time accurate to the millisecond
    //It could do more, but unfortunately wxDateTime isn't that precise
    wxDateTime now, ref;
    LARGE_INTEGER pc;
    QueryPerformanceCounter(&pc);
    pc.QuadPart -= offsetCounter.QuadPart; //ticks elapsed since initSuperTimer()
    pc.QuadPart *= 1000;
    pc.QuadPart /= perfFreq.QuadPart; //convert elapsed ticks to milliseconds
    ref = wxDateTime::UNow(); //Get system time for reference
    now = refTime + wxTimeSpan(0, 0, 0, pc.QuadPart); //Calculate current time from reference time and time elapsed since then
    if ((now - ref).Abs().GetMilliseconds() > 125)
    {
        //If calculated and system time differ by more than 125ms in either direction, reinitialize
        //This also assumes that wxDateTime::UNow() is at least precise to 125ms. If it's not, this
        //will constantly reinitialize the globals
        initSuperTimer();
        return getDateTime();
    }
    return now;
}
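
Usage is about as simple as it gets: call initSuperTimer() once at startup, then getDateTime() whenever you need a timestamp. A rough sketch of how it could drive frame timing (the timeFrames() function and the ten-frame loop are made up for illustration):

#include <cstdio>
#include "supertimer.h"

void timeFrames()
{
    initSuperTimer(); //do this once before the first getDateTime()

    wxDateTime last = getDateTime();
    for (int frame = 0; frame < 10; ++frame)
    {
        //... render a frame here ...
        wxDateTime now = getDateTime();
        wxTimeSpan elapsed = now - last;
        printf("frame %d: %s (+%ld ms)\n", frame,
               (const char*)now.Format("%H:%M:%S.%l").mb_str(), //%l = milliseconds
               (long)elapsed.GetMilliseconds().GetValue());
        last = now;
    }
}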

Thursday, 13 January 2011

Dell U2711 Part 2 (Calibration)

I can't believe I just did this.
I borrowed a calibrator yesterday - a Datacolor Spyder 3 Pro - so that my nice new monitor would display its colors properly, and so that the colors on the monitor would match those on the TV.

Everything went well. I calibrated the monitor, I calibrated the TV, I calibrated the TV to my HTPC. All the colors looked about the same. The grays on the monitor finally looked correct. The TV and the monitor matched. But I wanted more...

I wanted to see how my monitor fares and the "Pro" software didn't offer the info I wanted. So for the purpose of this blog, I shelled out a tidy 75€ to buy the Spyder3Elite 4.0 software. Talk about stupid...

Well, since I paid the money, here are the results of a 120 nit, 6500 K calibration:
Gamut:
The red triangle is my monitor, the purple one is Adobe RGB.
A simple calculation shows that my monitor covers about 86% of the Adobe RGB gamut. Counting the parts of the monitor's gamut that fall outside Adobe RGB as well, its total area comes to some 92% of the Adobe RGB gamut.
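
For anyone wondering what that "simple calculation" is: the percentages come from triangle areas in the CIE xy diagram, which the shoelace formula handles nicely. The sketch below uses the standard Adobe RGB primaries but only placeholder values for the monitor (substitute the xy coordinates your calibration software reports), and it only gives the area-ratio figure; the coverage figure additionally needs the overlap of the two triangles, which takes more work.

//Triangle area in CIE xy via the shoelace formula; monitor primaries are placeholders
#include <cstdio>
#include <cmath>

struct XY { double x, y; };

double triangleArea(XY r, XY g, XY b)
{
    return 0.5 * std::fabs(r.x * (g.y - b.y) + g.x * (b.y - r.y) + b.x * (r.y - g.y));
}

int main()
{
    XY adobeR = { 0.6400, 0.3300 }, adobeG = { 0.2100, 0.7100 }, adobeB = { 0.1500, 0.0600 };
    XY monR = { 0.68, 0.31 }, monG = { 0.21, 0.69 }, monB = { 0.15, 0.06 }; //placeholders, not measured values

    double adobe = triangleArea(adobeR, adobeG, adobeB);
    double mon = triangleArea(monR, monG, monB);
    printf("Monitor gamut area = %.0f%% of the Adobe RGB area\n", 100.0 * mon / adobe);
    return 0;
}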

Gamma:
Gamma looks OK. It isn't on the target curve, but it doesn't stray far from it either. A minor correction in the profile covers this just fine.

Screen backlight uniformity for 100% white:
The maximum deviation is 10%, in the upper right corner.
The graph looks far worse than reality, I'm happy to say. I can't say the difference bothers me.

The measured contrast ratio came out at 400:1 (at the calibrated 120 nit white point, that implies a black level of roughly 0.3 nits). This figure is quite disappointing, especially after reading the AnandTech review, which suggested I'd get at least 800:1.

After shelling out the money and seeing the results, I wish I hadn't done so. I was aiming for a total bragging piece: the reviews suggested 95+% Adobe RGB coverage, superior contrast ratios, superb viewing angles and backlight uniformity that wasn't bad either. Instead I only got a good gamut, average contrast and pretty poor screen uniformity. I can't say I feel bad about purchasing this monitor, but it sure isn't what it's advertised to be.

No more to say, I'm afraid.

Wednesday, 12 January 2011

Why would NVidia seek x86 licenses now?

I read this article today and it mentioned something very interesting: how NVidia wanted to bargain an x86 license out of Intel in this last bout of lawyerfest they had - the one that turned out so great for NV, raking in a tidy $1.5 bn over a few years.

So the result was quite favorable for NV, but they wanted more out of the deal? If I understand correctly, they got chipsets back, but chipsets won't help them any more. The chipset is only a minor part now that CPUs have taken over all the goodies, and eventually more and more peripherals will migrate from the chipset to the CPU - even NV must surely realize that.

But what on earth would make them go for x86? Sure, it's a huge market right now, but it's also a cutthroat one! AMD has been struggling against the behemoth that is Intel for decades with more or less success - in fact, I only know of one instance where they had a short lead (the original Opteron / Athlon 64). VIA retired to a niche years ago when they found they had no chance competing against those two, and any other attempt (remember Transmeta?) went under as well.

Even with an x86 license, it would cost them billions in R&D to get somewhere around Intel's previous-generation CPUs, or even the generation before that. So why even bother?

Instead, I think NV should focus on ARM. They have been doing it, and doing it relatively well. Sure, their chips have high power consumption, but that doesn't matter in every market segment. Their Tegra 2 won't find a place in a smartphone, but beefed up it sure could find a place in the netbooks and even PCs of tomorrow.

ARM, AFAIK, doesn't restrict core optimizations. Since NV knows a lot about caches and similar things, they could take what they already have and beef it up into a chip to behold! An ARM chip with a decent graphics core, performing better than, say, Zacate, would surely attract plenty of attention, especially once Windows 8 comes out. If it consumed less than 4 W, all the better.

So, NVidia: why even bother with something that may never be competitive? Instead use the knowledge you already have and differentiate yourselves!