Display too bright


I just installed Windows 7 Pro on an HP Pavilion with an HP w2007 20" monitor. I had no color issues initially and everything worked fine. The first time I ran Windows Update, it included an optional update called "HP- Display - HP w2007 Wide LCD Monitor". That update "failed" to install, and when I rebooted, my screen had a very "neon" look to it: my desktop background, a picture of a tree with a bunch of red leaves, was very washed out, and most of the leaves were now bright red with no detail visible. This is evident everywhere, not just on my desktop background (another example: the little green circles next to contact names in Gmail chat are a much brighter green than usual). Everything is too bright. I have tried the following fixes:

-Reinstalling the HP display update manually (it worked this time) and rebooting - didn't work.
-Resetting my monitor to the factory defaults - didn't work.
-Downloading and installing the latest HP driver for this monitor - didn't work.
-Going through "Control Panel -> Appearance and Personalization -> Display -> Calibrate Color" and completing the color calibration wizard (I believe that's new in Windows 7?). Also didn't work.

I have a dual-boot to Ubuntu set up on this machine, and the colors in Linux still look fine, so I think it's a problem with Windows and not the monitor. Since it was working fine before, I would rather get it back to whatever was working than manually tune the brightness/contrast on my monitor. Any ideas?
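Since Linux looks fine, one thing worth checking (a guess, not a guaranteed fix) is whether the failed update left a skewed gamma ramp loaded in the Windows display driver; washed-out, oversaturated color is a classic symptom. A minimal Python sketch that pushes a neutral, linear gamma ramp back to the primary display via the Win32 `SetDeviceGammaRamp` call:

```python
import ctypes
import sys

def linear_gamma_ramp():
    """Build the identity (linear) gamma ramp: 3 channels x 256 WORD entries."""
    ramp = (ctypes.c_ushort * 256 * 3)()
    for channel in range(3):          # red, green, blue
        for i in range(256):
            ramp[channel][i] = i * 257  # spreads 0..255 evenly over 0..65535
    return ramp

def reset_gamma():
    """Apply the linear ramp to the whole screen (Windows only)."""
    user32 = ctypes.windll.user32
    gdi32 = ctypes.windll.gdi32
    hdc = user32.GetDC(None)  # device context covering the entire display
    try:
        ok = gdi32.SetDeviceGammaRamp(hdc, ctypes.byref(linear_gamma_ramp()))
        return bool(ok)
    finally:
        user32.ReleaseDC(None, hdc)

if __name__ == "__main__" and sys.platform == "win32":
    print("gamma reset:", reset_gamma())
```

If the neon look disappears after running this, the driver's color/gamma settings (rather than the monitor) were the culprit, and the fix can be made permanent from the graphics driver's control panel.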


I'm using a Fujitsu S6520 that I recently upgraded to Windows 7 Professional (32-bit). The Fujitsu comes with 8 preset screen backlight levels that can be adjusted via the Fn key on the keyboard. Prior to this, I was using Windows XP on the same computer, and I used to set the backlight at level 3. After upgrading to Windows 7, even level 2 is too bright (it is slightly brighter than level 3 was on Windows XP), and level 1 is too dark. The jump is drastic. Oddly, when I am in the BIOS, the backlight levels work fine: the jumps are gradual, and, as in Windows XP, level 3 is a comfortable level.

I notice that I can control the backlight via Windows Mobility Centre, but even there the levels along the slider are preset: the lowest backlight is 1, the 2nd is 15, the 3rd is 29, and so on. If there were a way to reduce backlight level 2 on my computer to around 10 or 11 in Windows Mobility Centre, that would be perfect. Is there any way I can do this?

I desperately need a solution. My display drivers are up to date.

Thank you.
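There is no supported way to redefine the Fn hotkey steps themselves, but if the panel driver exposes the standard WMI brightness interface, you can set an arbitrary percentage directly. A rough Python sketch (assuming the `WmiMonitorBrightnessMethods` class in the `root/wmi` namespace is present on your hardware; it is not exposed on every machine):

```python
import subprocess
import sys

def build_brightness_command(percent):
    """Build a PowerShell command that asks the WMI brightness interface
    (root/wmi, WmiMonitorBrightnessMethods) for a specific backlight percent."""
    if not 0 <= percent <= 100:
        raise ValueError("brightness must be 0-100")
    script = (
        "(Get-WmiObject -Namespace root/wmi "
        "-Class WmiMonitorBrightnessMethods).WmiSetBrightness(1, {})"
    ).format(percent)
    return ["powershell", "-NoProfile", "-Command", script]

def set_brightness(percent):
    """Run the command; only meaningful on Windows with a WMI-aware driver."""
    subprocess.run(build_brightness_command(percent), check=True)

if __name__ == "__main__" and sys.platform == "win32":
    set_brightness(10)  # between the stock "level 1" and "level 2" steps
```

You could bind a script like this to a shortcut key to get the in-between level the slider won't offer.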

Edited by Phil Rabichow on 09-Aug-01 19:36. I'm not sure what I did. All I did was change the display to start with controls rather than the main index. Now my style sheet is beige instead of bright alternating colors, and the print is so tiny I can't see what I'm typing. How can I change it back?

OK for the colors - got them back. But "large" is still way too small. There must be someone else whose vision isn't eagle-like.

ATX Aerocool VX-R case
Gigabyte EP43-UD3L motherboard
Gigabyte Radeon HD 4550
1TB Western Digital RE3 HDD
630W Rosewill Green series PSU
2GB (2x1GB) DDR2 Crucial Ballistix Tracer RAM @ 1066
2 HP 1260i DVD burners
Intel Core 2 Quad Q8400 @ 2.66GHz

I'm trying to install Windows 7 Enterprise x64. The disc boots correctly but hangs before the language selection screen for about 5 minutes. I click Next, then Install Now, and it "loads setup" for about 10 minutes. I select my drive and attempt to install. It spends about half a second on copying files, stalls for a minute or two, then proceeds to expanding files, which stays at 0% for a few minutes before the percentage starts to climb. It crashes at random intervals during the expanding files phase in one of 3 ways:
-a system lockup where everything sticks and the mouse can't move
-a failure in which the screen flashes and looks somewhat tiled
-a crash where parts of the display turn bright red.

I installed this onto my laptop with the same disc (it has a volume license), so the disc is fine.

No luck with removing just one stick of RAM; I've tried using either stick in various slots.

I thought the GFX card was running too hot, so I switched it out, and it still failed.

At first I thought it was the hard drive, but I RMAed it and it still fails on the new one (the first drive had errors; the 2nd one did not).

Tried using different disc drives.

Tried switching cables.

I'm at my wit's end with this PC. XP installed perfectly fine on it. I don't have a Vista disc, but Seven freakin' hates the machine.
Any suggestions are welcome.


I did a clean install of Windows 7 64-bit a few days ago, and since then I have been having issues with the screen brightness.

When I turn the laptop on, the default screen brightness on the 'Balanced' power plan is about halfway up; however, when I try to move this up on the screen brightness bar, it automatically moves to the bottom. The bar IS present, as is the tab to move it, but every time I try this it just moves automatically back down. Also, the hotkeys (used with the FN button) have not worked at all on the Windows 7 RC and now on the 64-bit release.

I have seen a fix for this which involves going into the BIOS at boot; however, my HP 6735s has its own version of the BIOS menu and I can't find the screen brightness adjustment anywhere!

If anyone else has an HP and knows how to access this, or even better, if anyone knows how to fix this issue, it would be greatly appreciated!

Cheers, Ant.

PS: I am a relative noob, so if anyone does know, layman's terms would be very much appreciated too, lol! Cheers guys!

Hi, I have been looking for a media player that will play my files and have tried many... perhaps that is the problem, because despite uninstalling all the programs I tried, any video player, such as GOM, displays a very dark picture, requiring me to set brightness to max.

I have had a very similar problem before, and found that the level control in ffdshow had somehow been set low, giving a dark picture. I don't seem to be able to get the video icon of ffdshow to display in the taskbar so I can click on it and check whether the levels are low. I have run MPlayer both as a standalone and as it comes in the K-Lite Codec Pack.

Anyone got any ideas?

Thanks, fos

Hi guys, I'm having trouble adjusting the screen brightness on my TV when it's connected via an HDMI cable to my laptop. If I try to increase the brightness through my laptop (VAIO VGN-SR), it does so only on my laptop display, while my TV remains the same. If I instead try the opposite, adjusting the brightness through my TV (Panasonic TX-L32D25E), it doesn't change anything whatsoever. The thing is, every time I watch a movie through my laptop, the picture on my TV is too dark and I can't adjust it; however, if I watch the normal channels, I can adjust the screen brightness without any problems. Does anybody know how I can increase the brightness on my TV when connected to a laptop? Is it an issue with the computer rather than the output device?
thank you
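A common cause of this symptom (a guess, since it depends on the TV and GPU driver) is a range mismatch over HDMI: the laptop sends limited-range RGB, where black is code 16 and white is 235, while the TV expects full range (0-255), so the picture looks dark and flat. Many TVs also lock their picture controls on PC/HDMI inputs, which would explain why the TV's own brightness control does nothing. The level expansion the GPU driver should be doing is simple arithmetic, sketched here in Python:

```python
def limited_to_full(value):
    """Expand one 8-bit limited-range (16-235) component to full range (0-255)."""
    scaled = round((value - 16) * 255 / 219)
    return max(0, min(255, scaled))  # clamp "blacker than black" / "whiter than white" codes

# Limited-range black (16) and white (235) map to true black and white:
print(limited_to_full(16), limited_to_full(235))  # 0 255
```

Practically, the fix is usually a driver setting (e.g. the output dynamic range option in the Intel/NVIDIA/AMD control panel) or renaming the TV's HDMI input to a PC mode, rather than anything in Windows itself.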

Hi All,

I have a problem with my On-Screen Display: when I tap the wireless card button on/off, or press the 2nd function key and F8 to mute/unmute, I no longer get the small OSD that briefly shows over the top of whatever is open to notify you of your hardware/application controls' status.

This also happens with all applications connected to the 2nd function key and the top row of function keys when pressed, plus the up/down arrow keys (for volume) and the left/right keys too (screen brightness).

I would be most grateful for any advice on this.

Kind Regards

I've only recently discovered the power of this little field code nugget: {MACROBUTTON NoMacro [Name]} for my templates. For those unfamiliar, what it does is display a field called [Name]. When the user clicks the field and begins typing, the field vanishes and only the user text remains. This is an awfully handy way to get people to put the right information in the right spot.

I want to use this field code to include a lot of descriptive information. For example: "In this paragraph, briefly summarize your career prior to obtaining your current position."

Here's the problem: when I use a lot of text, or put it in a narrow column, Word gives me the error message "Display text cannot span more than one line!" Well, why the heck not? Better yet, how can I include this kind of information and still make it easy for the end user (who's not too bright, I should point out) to put the right info in the right spot?
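One workaround (a sketch; Word's one-line limit on MACROBUTTON display text is by design) is to split the long prompt across several MACROBUTTON fields, one per line, so each field's display text fits on a single line:

```
{MACROBUTTON NoMacro [In this paragraph, briefly summarize]}
{MACROBUTTON NoMacro [your career prior to obtaining]}
{MACROBUTTON NoMacro [your current position.]}
```

The trade-off is that the user has to overtype each piece separately. An alternative is to keep the clickable prompt short, such as [Career summary], and put the full instructions in adjacent document text formatted as hidden, so the guidance is visible on screen but never prints.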


In a fast-paced world, three years after Windows 7, Microsoft’s upcoming successor OS, Windows 8, remains a hard sell. Does that mean it is not worthy of the buzz and hype?

Browse a tech magazine lately? Check out a news site about technology? Chances are, you will read something about Windows 8. Just two weeks ago, Microsoft released the Consumer Preview for Windows 8. It hasn’t even hit store shelves yet, and people are already complaining. This is nothing new in tech circles: everyone is resistant to change. Sometimes, that resistance to change can be helpful, and even good feedback for developers. Other times, it can result in a shouting match that remains unwinnable. But like many things, thinking in absolutes is often counterproductive, and seldom objective. Businessmen and women will judge Windows 8 with business acumen, savoring each bit of financial data and sales indicators to prove a point about the new system. Decision-makers in IT circles will look at security and reliability before weighing in with a more structured cost-benefit analysis that deals in infrastructure. Home users are likely to place more value on aesthetics, performance, and ease of use as major factors in the upgrade decision.

It is the middle of the month: March 15, 2012, to be precise. It is hard to believe that three years have already gone by since the release of Windows 7. Many IT business people, including server administrators, are just starting to become acclimated to the Windows 7 client environment, its off-shoot productivity software, and the Windows Server 2008 family of products, including Windows Server 2008 R2. In one worldview, slow and steady wins the race. While more tech-savvy companies clearly saw the benefit of migrating quickly upon release, many SMBs, mid-range companies, and home users remain in a Windows XP limbo – whether due to the economic mess that most of the world is dealing with, budgetary constraints, or simply a lack of knowledge about how to port all of their important data over to a Windows 7-based network. But as time has gone on, these groups have become a minority, as far as is known. While much of the third world may still be using Windows XP, and even older systems, reliable data on this is difficult for skeptics and true believers alike to digest. In agrarian, rural, and largely undeveloped lands, Internet access still remains a scarce commodity, even as mobile phone companies continue to make inroads.

Back here in the West, the difference is noticeable in how a company conducts its business, especially when you walk into one running Windows XP and Server 2003. It is not uncommon to see pending Windows Updates on every workstation, versus an up-to-date Windows 7 network. If the IT tasks are outsourced, how that time is spent, and for what purpose, will likely face scrutiny and prioritization. For instance, the administration of an important database may take precedence over the application of client operating system updates. Many system administrators may simply ignore, or be unaware of, the capability of domain controllers and file servers to push out updates across the internal network using WSUS. In many offices, however, you are likely to find a hybrid network. With no EOL policy or strategy, many businesses end up with certain departments stuck between Windows XP and Windows 7, and the split happens when they purchase new hardware – not on a timetable, but out of necessity. A hybrid network of these systems is not exactly the best medicine for a business, or for a group of home users who rely on their Windows computer systems for day-to-day activities, but it may be better than nothing.

A Trip to Seattle: Home to 90’s Alternative Music, Starbucks Coffee, and Microsoft
On April 1, 2011, I received the Microsoft MVP award for Windows Expert – Consumer. It was a real treat to know that Microsoft had recognized my contributions in the form of setting up forum websites and participating in them. I was certainly very thankful for the award, and genuinely happy to know that I could continue to do what I do best, as that is why I received it. I wasn’t the first to be recognized by Microsoft for contributions to my own website: Ross Cameron (handle: kemical) became one of our first Microsoft MVPs. One of our former members, Greg (handle: cybercore), had contributed thousands of helpful posts on Windows7Forums.com and was nominated. As time went by, we were fortunate enough to see other MVPs join our website, including Shyam (handle: Captain Jack), Pat Cooke (handle: patcooke), Bill Bright (handle: Digerati), and Ken Johnston (handle: zigzag3143). These people are experts in their field and genuinely reflect an attitude of altruism towards people. Such traits are hard to find, especially over the Internet, and in a field that is driven by individual competitiveness yet forces group cohesion as a necessity. I started communicating with one MVP as a result of a disagreement, but have since gained an enormous amount of respect for her: Corrine Chorney, the owner of SecurityGarden. When I made a video about ESET Smart Security that contained an error or two, I was contacted by a fellow MVP: Aryeh Goretsky. These types of people live and breathe technology, and thus even a brief e-mail exchange can be a breath of fresh air. It has become clear to me that Microsoft’s selection process for those who receive this award is hardly based on pure number crunching, but on gauging a person’s enthusiasm and demonstrated expertise in a field. Understanding how that translates to a much broader audience is compelling.
To me, this is a good thing, as it shows that even one of the world’s most successful corporations, in this case Microsoft, perhaps in one of the few acts of selflessness that one could expect from a multi-national corporation, finds customers who have made a mark in information technology and celebrates that. I become hopeful that they recognize the countless others who make contributions on a day-to-day basis. With half a dozen certifications under my belt, and nearly a decade and a half of experience, I am but one person. And for every Microsoft MVP I have met, their dialogue always translated into real energy and enthusiasm. How many countless others have not received an award, or merit, for helping someone “fix their box”? I suspect that number is in the millions. This in no way belittles the award, because to me, such an award really is about helping others.

Oftentimes, helping others means giving someone your opinion: even if your opinion runs contrary to running a system consisting purely of Microsoft software. One example is Windows Live: I have a fundamental disagreement about how I choose to use Windows Live, and whether or not I want Windows Live services embedded into my operating system experience: something that home users with Microsoft-connected accounts will notice almost immediately upon starting the OS. I do not, in any way, undervalue the development of these services, or their potential market value to consumers. I simply have a difference of opinion. And this should in no way diminish someone’s ability to receive an award. I am not an employee or pitch man for Microsoft products, but someone who conveys his own thoughts and expertise in that area. To me, the award would have little value if I were expected to tout the benefits of using Microsoft Security Essentials over a paid anti-malware suite. I think that even the developers of the software themselves would take exception to misinformation. And to Microsoft’s credit, they have asked nothing of the sort from me. To me, that is a fundamental sign of an award that encourages community participation and expertise in a given area of technology, from a company that is now expected to set standards on the world stage.

Not everyone made it to this summit: for many of them, Redmond, WA is far, far away. For me, living in New York, that also rings true. But it is surely the people who make it worthwhile – even when you’ve never met them in person, the way they behave and conduct themselves towards you speaks volumes. And so I’ve learned a lot from every Microsoft MVP I have met – both online and off; in a five-minute conversation, or a fifteen-hundred-word e-mail.

During the Microsoft MVP Global Summit in the Seattle-Bellevue-Redmond area, I had the opportunity to meet one of the most interesting and eclectic groups of people in information technology that I’ve encountered in years. Truly, the revolution taking place around technology in Seattle, and on the famous campus grounds at 1 Microsoft Way in Redmond, is in no way limited to laboratories that are seldom, if ever, open to the public. Quite to the contrary, mingling with Microsoft’s extensive community of worldwide supporters and individual contributors doesn’t just result in hearing success story after success story (although that is fun too). Among the thousands of people invited to the event from all over the world – Japan, the rest of Asia, North America, Brazil, and beyond – I found myself welcomed by a remarkable group of individuals. These men and women fit no traditional demographic one would think of – in fact, quite the opposite. At 29 years old, I met people younger and more successful than myself, who had founded their own start-up firms. I also met much older men and women, who had witnessed the transformative nature of technology and got involved, one way or the other. These men and women came from all walks of life, but I am reminded, in particular, of a few of them I met who had a real impact on me. As someone who had come so far to be a part of this event, I did feel uneasy knowing that I was there alone. The individuals I met at the summit were polite, courteous, helpful, and informative. It was not difficult to see why they are considered experts in their field.

Whether their specialty was the Zune MP3 player, the Xbox, MS SQL, or the Microsoft Windows family of client and server products, this entire network of community supporters really illustrated why Microsoft continues to have far-reaching success around the world. Their enthusiasm for these technologies is clear and concise, and it breaks down the traditional barriers of race, color, nationality, and gender.

At that summit, I was witnessing not just what technology would be capable of doing in the future; as a first-timer, I got to see with my own eyes what it had done for just about every participant I was able to strike up a conversation with. I travelled from New York City to Seattle-Tacoma airport in a few hours and arrived severely jet-lagged and exhausted. Having travelled outside my own time zone for the first time, suspended at 38,000 feet in the air, I found myself dizzy, drowsy, and oftentimes downright sick once I got off the airplane. It was something really unfamiliar to me, but in a way, strange thoughts began to fill my head. I realized that in Seattle, it rains nearly every day. There is certainly less sunlight there than in New York. Perhaps this lack of sunlight had inadvertently made people more likely to turn on a computer and create some kind of innovative programming. It was a silly thought, but staring at the horizon in the distance, I could not help but think about Mount Rainier, Lake Washington, and the land I was now interconnected with. A frequent landmark of science fiction, Seattle’s own Space Needle is a national treasure. A marvel of ingenuity west of the Mississippi River valley, the Space Needle is essentially a giant UFO-shaped tower that houses restaurants and sight-seeing tours, and shines a giant beam of light that was part of the original design but only recently added.

Perhaps, I thought to myself, this is how the term “cloud computing” had caught on. With strong sunlight seldom permeating the area, to my knowledge, and with rain and humidity always on the horizon in a constant lake effect, it suddenly made sense to me how the area had become famous for its murky alternative rock grunge music in the 1990’s, the Jimi Hendrix Experience, the evangelical computer programmers, and a number of activities, like concerts and music performances, that are usually held indoors! In a way, it all made sense to me now, and I spent a great majority of my time taking in the sights, sounds, and hospitality of an entirely different area of the country. The most populous city in the Pacific Northwest is also home and origin to Starbucks. It began to make sense to me that it would be here, more than anywhere else in the USA, that they would need fresh coffee beans from Jamaica available at a moment’s notice. And as humorous and sophomoric as that may read, I still think there is some truth to this.

This summit was my first experience on the road with my Microsoft MVP award for Windows Expert – Consumer. It was certainly a bumpy ride, and I did not take advantage of all of the event activities I could have. Windows product group experts and Microsoft employees were available, nearly from the break of dawn to the dark hours of night, to provide on- and off-campus sessions to enthusiastic individuals. Looking back, the trip was worthwhile. While most of the people I met had embedded themselves in this event for many years, I was certainly a newcomer. Determined to act the part, I tried my best to overcome the massive jetlag I had encountered, and vowed never to eat sushi after getting off a six-hour flight again. Who could not be anxious arriving in such a foreign place compared to the east coast of the USA? I have certainly flown and driven up and down the east coast most of my life, visiting nearly all of the north and south, but I had no idea what to expect near Redmond. An acquaintance of mine from Los Angeles was able to help me deal with the insomnia and time difference that come with this type of travel, and she probably helped me in a way that she still doesn’t know – all from a few text messages. I am constantly reminded that technology has made us all interconnected, no matter where we are. At the Microsoft MVP Global Summit, what I found were individuals, many of whom had a certain selflessness about them, and a desire, above all things, to learn more, experience more, and help even more.

Striking up a conversation with anyone at the event, it was easy to see how these men and women had achieved recognition of excellence from Microsoft. While many young people who attended the event had created innovative ways to help others by setting up websites or studying the inner workings of the Microsoft entertainment platform, others had been part of the commercial information technology circles and big businesses that have changed the environment of the Internet. I even caught a glimpse of two individuals who appeared to work for a former web host that one of my websites was hosted on. These businesses, powered by ingenious individuals, have swept the Internet. And while many people appeared to be there as part of a corporately backed package, it was clear to me that most others had made a name for themselves by creating their own platform for innovation and success. Most important, and most pronounced to me, was that each and every person there had reached that point through acts of selflessness – by helping others. In each and every instance, you could go around the room and know that you were surrounded by people who could speak your language, whether that be ASPX, XML, C, PHP, JavaScript, or BBCode. While a person there from Asia may not have comprehended what I was talking about if he did not speak English, if I showed him Process Monitor in Windows, I could probably communicate with him on some technical level.

To contrast that, I came home to an environment back in New York where the Windows 8 Consumer Preview had just been released. It was no surprise to me that Windows 8 had been getting some flak for replacing the Windows Start Orb and Start Menu with the Metro User Interface (Metro UI). Windows 8 still has some major feature improvements going for it. This early in the game, there is no question that many of these features have gone undocumented, exist under the hood, or simply have not reached a stage in development that was acceptable for the Consumer Preview. First, it is important to note that the Consumer Preview is as much a beta release for public testing as it is a marketing tool for Microsoft. When we examine how this has been released to the public, it is not hard to conclude that it is also a way to gauge public reaction to the first serious and inherent differences in the way the Microsoft Windows GUI has been presented – ever. Previous operating system releases took the idea of the Start Menu, added search capabilities, and refined a core concept. Slowly but surely, improvement occurred over time, with the look and feel of Windows remaining consistent through the ages.

The Consumer Preview Was Released To Test Your Reaction; Not Just The OS

In fact, this is a public release of Microsoft Windows appearing in the limelight in what is essentially a beta (and presumably near release-candidate stage), with some features either completely omitted or broken. But not all is lost for Windows 8. There are some under-the-hood changes that show promise. I am not a Windows developer or programmer (most of my tinkering involves Linux, C, HTML, PHP, and JavaScript), but I can start to appreciate the level of changes being made at the core level as I get more time to become acquainted with this system and allow various whitepapers and documents to enter my lexicon.

Those looking to upgrade, or who will receive the upgrade already as part of a plan, like Microsoft VLK Software Assurance, will reap some benefits by making the upgrade to Windows 8. Businesses that have been around long enough will be familiar with creating and following a comprehensive End of Life (EOL) cycle plan. Such plans are usually coordinated between an enterprise administrative team that manages the day-to-day changes of internal certificate authorities, domain controllers, and mail servers. This group usually (and hopefully) has the training and forethought necessary to look at the official Microsoft release timetable, as well as the support for commonly used hardware and software. Assessments can be made to better understand how, where, when, and why this software and hardware is deployed, and under what conditions it is upgraded or phased out entirely. Not only does this level of planning bring clarity to what could otherwise become a source of enormous administrative overhead, but it also helps to mitigate the risk associated with allowing systems to continue running under-the-radar and without proper security auditing. Under such a scenario, businesses may choose to have their internal IT department perform network-wide audits of all systems. It is an affordable alternative to bringing in an outside specialist, and comparisons with Microsoft’s official support timetable can help make the transition to new hardware and software – as well as what comes with that -- such as training and significant infrastructure investment -- a more conceivable possibility.

Home users can take a much simpler approach: monitor the requirements for tasks like school, work, and entertainment, while keeping up to date with Microsoft’s in-band and out-of-band security patches. As mentioned previously, Microsoft already publishes a roadmap indicating when mainstream support, and even updates, will be terminated for its operating systems. Combining all of these ideas, it is not unreasonable to conclude that one can continue using Windows 7 for a few more years without much difficulty. When the time comes, an upgrade will be made easy, as the large system manufacturers and independent system builders will, no doubt, bundle OEM copies of the system after RTM (“release to manufacturing”). In the meantime, one could begin to upgrade a small office or home network with new computers as the need arises, in order to take advantage of the new feature set that is sure to set a precedent going forward.

Very large enterprise networks usually already make use of proprietary, custom software and hardware. Those businesses can begin transition planning in phases, and will have access to fully licensed Microsoft support personnel who work in the corporate sales division of the company. Those resources can be accessed by standard enterprises (approx. 200 client systems) and by mid-range offices (approx. 50-200 client systems) through Microsoft Gold Certified Partner program members that also specialize in employee training, resource management, and all-inclusive maintenance plans. Even a few well-trained and certified IT consultants and managers could handle a migration and post-migration scenario with the right level of planning and funding.

Stay positive: here is some deductive reasoning as to why not all is lost, and how the feature improvements that Windows 8 customers will benefit from may actually start to appear after the OS hits store shelves (the kind of thing that may not be readily apparent in the incomplete Consumer Preview version):

Virtualization Scores A Win

Hyper-V virtualization included in Windows 8 will allow you to take your computing experience to the next level. If you are not entirely enticed by the prospect of running Windows 8, or still have a co-dependent relationship with legacy applications, Hyper-V will be sure to help you in that area, much like Microsoft Virtual PC brought Windows XP onto the desktop for many Windows 7 users. While Hyper-V isn’t about to take the throne away from VMware’s line of virtualization products just yet, especially Workstation and ThinApp, expect the inclusion of Hyper-V to be an experience with the potential to compartmentalize the installation of applications – even really old ones. With Hyper-V and Metro as platforms likely to be directly controllable and manageable from Windows Server 8, IT admins can rejoice at the concept of virtualizing what is left of the desktop – and preventing inappropriate use of computer system resources at work. With full control of Metro and Hyper-V under Active Directory, system management is about to get a whole lot easier. Windows 8 fits as the one OS that office managers can control directly from Windows Server 8 without remorse. Limiting access to the desktop will reduce headaches for employees who may only be obligated to launch specific company-approved Metro apps.

Metro: The User Interface Revolution
Metro UI will not be alien to anyone who is old enough to remember Microsoft Encarta, or to any youngster who has already owned a Windows Phone. I still remember using Microsoft Encarta’s slick navigation system to look up John F. Kennedy’s 1961 inaugural address. This was one of the first times I saw decent video footage in an encyclopedia. Back in those days, everyone was on dial-up, and an encyclopedia like Encarta was the be-all and end-all of factoid finding for non-academics and kids still in grade school. So expect Metro-powered applications, programmed in C++, C#, HTML, JavaScript, and even Visual Basic. This programming platform, dubbed Windows Runtime, or WinRT for short, is object-oriented and just getting started. With enough knowledge of HTML and JavaScript, many people out there with limited knowledge of C++ could create some pretty snazzy object-oriented apps that make use of jQuery and YUI hosted over the web. With the launch of the Windows App Store, don’t be surprised to see some amazing third-party apps put long-time industry staples to shame. Once you start looking into the development platform for Metro, you start to realize that it isn’t just a gimmick for touch screen users. Ostensibly, a great deal of time developing the .NET Framework is about to pay off, in bundles, for everyone who starts using Metro.

Gamers Not Doomed; HID Development Pushed Forward by Windows 8 OS
Gamers likely won’t be left out of the picture. Metro apps are designed to run in full screen, and as all hardcore gamers know, most high-intensity games throw you into full-screen mode anyway. The difference is likely to be negligible, but who wouldn’t like a concise way to manage all entertainment software and keep it running in the background every once in a while? Single-player games that enter the market as instant classics, like TES: Skyrim, could suddenly appear more interactive in the future. Don’t be surprised to see some form of Windows 8 incorporated into the next version of Xbox (Xbox 720?) with DirectX 11 support. It would be nice to see cross-compatibility between the Xbox and Windows PC. Imagine if you could run any console game on a PC and vice versa: that kind of unification would keep a lot of people from buying all those Media Center extenders and going wild on home entertainment systems. Only time will tell how far Microsoft will take us down the rabbit hole. For gamers, that is a great thing.

Multi-monitor and multi-touch support will bring Windows 8 to tablets and phones like never before with certified Metro applications that are programmed for Windows Runtime (WindowsRT). Expect a lot to happen in how we use our desktop and laptop systems. While major advancements in human interface devices are years away, it appears to be one of the major cornerstones of IBM Research and Microsoft Research. Unification across platforms is a recipe for redundancy, but in the case of sensitive data, redundancy is a very good thing. We want to be able to access our office files from home and our home files from the office, without necessarily having to do cartwheels with third party software. The integration of SkyDrive, and ultimately, shell extensions for third-party apps like Dropbox, is a given. Microsoft is never going to take over the cloud-hosted backup market, but they could pull off a pretty neat way of sharing, updating, and collaborating on projects between tablets, phones, desktops, laptops, game consoles, and more. Kinect for Windows is going to be scoffed at in the beginning, but once everyone has such a device linked up to their monitor, moving your hand around to change the active Window on your computer isn’t going to be that bad of a trade-off. In 2009, I gave a speech to a number of people in the public sector about what I saw as the cornerstone for future technology. That presentation included the fact that a device like the SmartBoard would be obsolete within five years’ time, due to the decreasing price of touch screen computers, and the ability for computing devices to detect human movement. While it didn’t go over well with the locals, it is happening, right now. That is something to be excited about. Whatever touch screen advancements Microsoft introduces with Windows 8 will once again push the hardware market to accommodate the software. 
This means all sorts of new human interface devices are already in development, even from third parties (see: Google Goggles/Google Glasses as one superlative example).

A New World for Software and Hardware Development

It’s not just a Microsoft world: Software companies, game studios, and all sorts of IT companies depend on the reliability and performance of Microsoft products and services, even when their customers aren’t in Microsoft Windows. This happens whenever an e-mail passes through an Exchange server, or a large database is designed for interoperability between a metadata retrieval system and Microsoft Access. Companies that specialize in document management, database administration, and even brand marketing will reap massive benefits from an interface with a display mechanism that can plot and chart raw data into something visually understandable. For example, if I tell you we ordered a hundred pizzas, each consisting of eight slices, and we only have 10 minutes to finish 25 slices, you’re going to wonder how many pizzas we have left. Once data entry software, even software initially designed with a Mac in mind, is designed for Metro, we’re not just going to be able to see how many pizza slices we have left – we may have the option to order some extras, or watch other people eat the ones left in 10 minutes. It’s that kind of world we’re delving into. We haven’t yet seen how great Metro can be, because software companies known for their innovative capabilities, like Google and Apple, are just getting started on WindowsRT and Metro. This stuff is not going away, and when all the great innovators in the world get involved, we’re going to see sparks fly off the third rail.

Negativity Bias
Many people who try the Windows Consumer Preview may be inexperienced with running beta software. And when your whole operating system is a big chunk of bugs, in many cases unbranded, and in some cases feature-incomplete, there is going to be a heck of a lot to complain about. I admit that I’m one of them. Take a look at my post about Windows 8 being a platform to sell Windows Live connected services. Well, of course that is what Windows 8 is, but it has the potential to be much more. Studies show us that, on average, people tend to remember a negative outcome 2.5 times more than a good one. That means you’re 2.5 times more likely to remember when you got a bad haircut than when you got a good one. You’re 2.5 times more likely to dwell on the day you lost your job than you are to remember the years you spent at that very same job contributing an enormous amount of productivity to the company’s bottom line. You’re 2.5 times more likely to remember the turbulence on the airplane that was unbearable for ten minutes than the great conversation you struck up with someone on that long flight. You’re 2.5 times more likely to remember the woman or man who rejected you on that first date than the laughs you shared going into the restaurant. This negativity bias is something we usually learn about in the first or second year of undergraduate psychology, but very few of us remember what it is. In general, your mind is trained to remember bad things more than good things, and to dwell on them. It is a response from the Stone Age, and a healthy one; it keeps you in balance. But in today’s high-tech and demanding world, it can be taken too far.
So yes, we can look at Windows 8 and positively say, “Maybe this thing won’t be so bad. Maybe I can learn it, and enjoy it.”

The True Test: Greater Than The Sum of Its Parts?

Don’t forget that Windows 8 will include a Start on Demand model for all system-related services. For years, I found myself sending Windows XP, Windows Vista, and Windows 7 customers to a web page called Black Viper (BlackViper.com). This site contained detailed guides on how to configure your Windows operating system to use as few services as absolutely necessary, and it became especially popular around the Windows Vista release. Essentially, the site goes through every single service running on your system and tells you not only what its default start setting is, but how best to optimize it to suit your needs. If you were trying to squeeze every last drop of performance out of the operating system, without much care for its ability to perform certain operations, you could always use BlackViper’s “Service Configurations” lists to decide whether it was safe to completely disable something like the Distributed Link Tracking Client service or the World Wide Web Publishing Service. If I haven’t lost you on this one, Microsoft has come up with a novel solution that is sure to improve your experience with Windows 8: Start on Demand. Under Start on Demand, when Windows 8 needs a service, it launches it, and only then. That, in and of itself, will save resources. And when we look at what is coming with memory deduplication, we are looking at true advancement in operating system performance at its most basic level.

Yes, the Consumer Preview is flawed, but for all its flaws, let us all think about these things and realize that the best is yet to come for an operating system ahead of its time.


In early January, we were tasked with creating a unique, interactive experience for the SXSW Interactive launch party with Frog Design. We bounced around many ideas, and finally settled on a project that Rick suggested during our first meeting: boxing robots controlled via Kinect.
The theme of the opening party was Retro Gaming, so we figured creating a life size version of a classic tabletop boxing game mashed up with a "Real Steel"-inspired Kinect experience would be a perfect fit. Most importantly, since this was going to be the first big project of the new Coding4Fun team, we wanted to push ourselves to create an experience that needed each of us to bring our unique blend of hardware, software, and interaction magic to the table under an aggressively tight deadline.

The BoxingBots had to meet a few requirements:

- They had to be fun
- They had to survive for 4 hours, the length of the SXSW opening party
- Each robot had to punch for 90 seconds at a time, the length of a round
- They had to be life-size
- They had to be Kinect-drivable
- They had to be built, shipped, and reassembled for SXSW
Creating a robot that could be beaten up for 4 hours and still work proved to be an interesting problem. After doing some research on different configurations and styles, it was decided we should leverage a prior project to get a jump start to meet the deadline. We repurposed sections of our Kinect drivable lounge chair, Jellybean! This was an advantage because it contained many known items, such as the motors, motor controllers, and chassis material. Additionally, it was strong and fast, it was modular, and the code to drive it was already written.
Jellybean would only get us part of the way there, however. We also had to do some retrofitting to get it to work for our new project. The footprint of the base needed to shrink from 32x50 inches to 32x35 inches, while still allowing space to contain all of the original batteries, wheels, motors, motor controllers, switches, and voltage adapters. We also had to change how the motors were mounted with this new layout, as well as provide a way to easily "hot swap" the batteries during the event. Finally, we had to mount an upper body section that looked somewhat human, complete with a head and punching arms.

Experimenting with possible layouts
The upper body had its own challenges, as it had to support a ton of equipment, including:

- Punching arms
- Popping head
- Pneumatic valves
- Air manifold
- Air tank(s)
- Laptop
- Phidget interface board
- Phidget relay boards
- Phidget LED board
- Xbox wireless controller PC transmitter / receiver
- Chest plate
- LEDs
- Sensors to detect a punch

Brian and Rick put together one of the upper frames
Punching and Air Tanks

We had to solve the problem of getting each robot to punch hard enough to register a hit on the opponent bot while not breaking the opponent bot (or itself). Bots also had to withstand a bit of side load in case the arms got tangled or took a side blow. Pneumatic actuators provided us with a lot of flexibility over hydraulics or an electrical solution since they are fast, come in tons of variations, won't break when met with resistance, and can be fine-tuned with a few onsite adjustments.
To provide power to the actuators, the robots had two 2.5 gallon tanks pressurized to 150psi, with the actuators punching at ~70psi. We could punch for about five 90-second rounds before needing to re-pressurize the tanks. Pressurizing the onboard tanks was taken care of by a pair of off-the-shelf DeWalt air compressors.
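As a rough sanity check on those numbers, Boyle's law gives the amount of free air available between full tank pressure and the minimum punching pressure. The sketch below is our own back-of-envelope illustration, not the team's math; it assumes ideal-gas behavior and compares gauge pressures against a 14.7 psi atmosphere.

```cpp
#include <cassert>
#include <cmath>

// Back-of-envelope estimate (an assumption-laden sketch, not the team's
// numbers): how many gallons of free air, at atmospheric pressure, the
// onboard tanks hold between full pressure and the minimum useful punch
// pressure. Assumes ideal-gas behavior and a 14.7 psi atmosphere.
double UsableFreeAirGallons(double tankGallons, double fullPsi, double minPsi) {
    const double atmospherePsi = 14.7;
    return tankGallons * (fullPsi - minPsi) / atmospherePsi;
}
```

With two 2.5-gallon tanks (5 gallons total) dropping from 150 psi to 70 psi, this gives roughly 27 gallons of free air per fill, which is why the tanks lasted several rounds before the DeWalt compressors had to top them up.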

The Head

It wouldn’t be a polished game if the head didn’t pop up on the losing bot, so we added another pneumatic actuator to raise and lower the head, and some extra red and blue LEDs. This pneumatic is housed in the chest of the robot and is triggered only when the game has ended.
To create the head, we first prototyped a concept with cardboard and duct tape. A rotated welding mask just happened to provide the shape we were going for on the crown, and we crafted each custom jaw with a laser cutter. We considered using a mold and vacuum forming to create something a bit more custom, but had to scrap the idea due to time constraints.


Our initial implementation for detecting punches failed due to far too many false positives. We thought using IR distance sensors would be a good solution since we could detect a “close” punch and tell the other robot to retract the arm before real contact. The test looked promising, but in practice, when the opposite sensors were close, we saw a lot of noise in the data. The backup and currently implemented solution was to install simple push switches in the chest and detect when those are clicked by the chest plate pressing against them.
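A common way to make simple push switches robust is to debounce them in software. The snippet below is a hypothetical illustration of that idea, not code from the BoxingBots project: a punch counts only when the chest switch reads pressed for several consecutive polls, and a single long press counts once.

```cpp
#include <cassert>
#include <vector>

// Hypothetical debounce sketch (illustrative, not the project's code):
// count a punch only when the chest-plate switch reads "pressed" for
// requiredConsecutive polls in a row, so electrical noise or a grazing
// contact doesn't score, and one long press doesn't count as many punches.
int CountPunches(const std::vector<bool>& samples, int requiredConsecutive) {
    int punches = 0;
    int run = 0;
    bool counted = false;
    for (bool pressed : samples) {
        if (pressed) {
            ++run;
            if (run >= requiredConsecutive && !counted) {
                ++punches;
                counted = true;  // wait for a release before counting again
            }
        } else {
            run = 0;
            counted = false;
        }
    }
    return punches;
}
```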


Different items required different voltages. The motors and pneumatic valves required 24V, the LEDs required 12V and the USB hub required 5V. We used Castle Pro BEC converters to step down the voltages. These devices are typically used in RC airplanes and helicopters.

So how does someone ship two 700lb robots from Seattle to Austin? We did it in 8 crates. The key thing to note is that the tops and bottoms of each robot were separated. Any wire that connected the two parts had to be able to be disconnected in some form. This affected the serial cords and the power cords (5V, 12V, 24V).

The software and architecture went through a variety of iterations during development. The final architecture used 3 laptops, 2 desktops, an access point, and a router. It's important to note that the laptops of Robot 1 and Robot 2 are physically mounted on the backs of each Robot body, communicating through WiFi to the Admin console. The entire setup looks like the following diagram:

Admin Console

The heart of the infrastructure is the Admin Console. Originally, this was also intended to be a scoreboard to show audience members the current stats of the match, but as we got further into the project, we realized this wouldn't be necessary. The robots are where the action is, and people's eyes focus there. Additionally, the robots themselves display their current health status via LEDs, so duplicating this information isn't useful. However, the admin side of this app remains.

The admin console is the master controller for the game state and utilizes socket communication between it, the robots, and the user consoles. A generic socket handler was written to span each computer in the setup. The SocketListener object allows for incoming connections to be received, while the SocketClient allows clients to connect to those SocketListeners. These are generic objects, which must specify objects of type GamePacket to send and receive:

public class SocketListener&lt;TSend, TReceive&gt;
    where TSend : GamePacket
    where TReceive : GamePacket, new()

GamePacket is a base class from which specific packets inherit:

public abstract class GamePacket
{
    public byte[] ToByteArray()
    {
        MemoryStream ms = new MemoryStream();
        BinaryWriter bw = new BinaryWriter(ms);
        try
        {
            WritePacket(bw);
        }
        catch (IOException ex)
        {
            Debug.WriteLine("Error writing packet: " + ex);
        }
        return ms.ToArray();
    }

    public void FromBinaryReader(BinaryReader br)
    {
        try
        {
            ReadPacket(br);
        }
        catch (IOException ex)
        {
            Debug.WriteLine("Error reading packet: " + ex);
        }
    }

    public abstract void WritePacket(BinaryWriter bw);
    public abstract void ReadPacket(BinaryReader br);
}

For example, in communication between the robots and the admin console, GameStatePacket and MovementDescriptorPacket are sent and received. Each GamePacket must implement its own ReadPacket and WritePacket methods to serialize itself for sending across the socket.
Packets are sent between machines every "frame". We need the absolute latest game state, robot movement, etc. at all times to ensure the game is functional and responsive.
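To make that concrete, here is a minimal sketch, in C++ rather than the project's C#, of a fixed-layout packet that serializes itself to bytes and back, as the GamePacket subclasses do. The field names here are hypothetical, not taken from the project.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative sketch of the GamePacket idea in portable C++ (the project
// used C# with BinaryWriter/BinaryReader). Hypothetical fields: two health
// values and a match state, packed into a 9-byte buffer. Assumes sender
// and receiver share the same byte order, as machines on one LAN did here.
struct GameStatePacket {
    int32_t player1Health = 0;
    int32_t player2Health = 0;
    uint8_t matchState = 0;  // e.g. 0 = waiting, 1 = running, 2 = paused

    std::vector<uint8_t> ToByteArray() const {
        std::vector<uint8_t> buf(9);
        std::memcpy(&buf[0], &player1Health, 4);
        std::memcpy(&buf[4], &player2Health, 4);
        buf[8] = matchState;
        return buf;
    }

    static GameStatePacket FromByteArray(const std::vector<uint8_t>& buf) {
        GameStatePacket p;
        std::memcpy(&p.player1Health, &buf[0], 4);
        std::memcpy(&p.player2Health, &buf[4], 4);
        p.matchState = buf[8];
        return p;
    }
};
```

Because every packet has a known layout, each side can read a frame's worth of state off the socket without any framing negotiation, which keeps the per-frame exchange cheap.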

As is quite obvious, absolutely no effort was put into making the console "pretty". It is never seen by the end users and just needs to be functional. Once the robot software and the user consoles are started, the admin console initiates connections to each of those four machines. Each machine runs the SocketListener side of the socket code, while the admin console creates four SocketClient objects to connect to each of those. Once connected, the admin has control of the game and can start, stop, pause, and reset a match by sending the appropriate packets to everyone that is connected.

The robot UI is also never intended to be seen by an end user, and therefore contains only diagnostic information.

Each robot has a wireless Xbox 360 controller connected to it so it can be manually controlled. The UI above reflects the positions of the controller sticks and buttons. During a match, it's possible for a bot to get outside of our "safe zone". One bot might be pushing the other, or the user may be moving the bot toward the edge of the ring. To counter this, the player's coach can either temporarily move the bot, turning off Kinect input, or force the game into "referee mode" which pauses the entire match and turns off Kinect control on both sides. In either case, the robots can be driven with the controllers and reset to safe positions. Once both coaches signal that the robots are reset, the admin can then resume the match.
Controlling Hardware

Phidget hardware controlled our LEDs, relays, and sensors. Getting data out of a Phidget along with actions, such as opening and closing a relay, is shockingly easy as they have pretty straightforward C# APIs and samples, which is why they typically are our go-to product for projects like this.
Below are some code snippets for the LEDs, relays, and sensor.
LEDs – from LedController.cs
This is the code that actually updates the health LEDs in the robot's chest. The LEDs were put on the board in a certain order to allow this style of iteration. We ran into a small issue of running out of one color of LED, so we used some super-bright ones and had to reduce the power levels to the non-super-bright LEDs to prevent possible damage:

private void UpdateLedsNonSuperBright(int amount, int offset, int brightness)
{
    for (var i = offset; i < amount + offset; i++)
    {
        _phidgetLed.leds[i] = brightness / 2;
    }
}

private void UpdateLedsSuperBright(int amount, int offset, int brightness)
{
    for (var i = offset; i < amount + offset; i++)
    {
        _phidgetLed.leds[i] = brightness;
    }
}
Sensor data – from SensorController.cs
This code snippet shows how we obtain the digital and analog inputs from the Phidget 8/8/8 interface board:

public SensorController(InterfaceKit phidgetInterfaceKit) : base(phidgetInterfaceKit)
{
    PhidgetInterfaceKit.ratiometric = true;
}

public int PollAnalogInput(int index)
{
    return PhidgetInterfaceKit.sensors[index].Value;
}

public bool PollDigitalInput(int index)
{
    return PhidgetInterfaceKit.inputs[index];
}
Relays – from RelayController.cs
Electrical relays fire our pneumatic valves. These control the head popping and the arms punching. For our application, we wanted the ability to reset the relay automatically. When the relay is opened, an event is triggered and we create an actively polled thread to validate whether we should close the relay. The reason we actively poll is that someone could be rapidly toggling the relay, and we wouldn't want to close it by accident. The polling and logic can result in a delayed or early trigger for closing the relay, but for the BoxingBots a difference of 10ms in a relay closing is acceptable:

public void Open(int index, int autoCloseDelay)
{
    UseRelay(index, true, autoCloseDelay);
}

public void Close(int index)
{
    UseRelay(index, false, 0);
}

private void UseRelay(int index, bool openRelay, int autoCloseDelay)
{
    AlterTimeDelay(index, autoCloseDelay);
    PhidgetInterfaceKit.outputs[index] = openRelay;
}

void _relayController_OutputChange(object sender, OutputChangeEventArgs e)
{
    // closed
    if (!e.Value)
        return;

    ThreadPool.QueueUserWorkItem(state =>
    {
        if (_timeDelays.ContainsKey(e.Index))
        {
            while (_timeDelays[e.Index] > 0)
            {
                Thread.Sleep(ThreadTick);
                _timeDelays[e.Index] -= ThreadTick;
            }
        }
        Close(e.Index);
    });
}

public int GetTimeDelay(int index)
{
    if (!_timeDelays.ContainsKey(index))
        return 0;
    return _timeDelays[index];
}

public void AlterTimeDelay(int index, int autoCloseDelay)
{
    _timeDelays[index] = autoCloseDelay;
}
User Console

Since the theme of the party was Retro Gaming, we wanted to go for an early 80's sci-fi style interface, complete with starscape background and solar flares! We wanted to create actual interactive elements, though, while maintaining the green phosphor look of early monochrome monitors. Unlike traditional video games, however, the screens are designed not as the primary focus of attention, but rather to help calibrate the player before the round and provide secondary display data during the match. The player should primarily stay focused on the boxer during the match, so the interface is designed to sit under the player's view line and serve as more of a dashboard during each match.
However, during calibration before each round, it is important to have the player understand how their core body will be used to drive the Robot base during each round. To do this, we needed to track an average of the joints that make up each fighter's body core. We handled the process by creating a list of core joints and a variable that normalizes the metric distances returned from the Kinect sensor into a human-acceptable range of motion:

private static List<JointType> CoreJoints = new List<JointType>(new JointType[]
{
    JointType.AnkleLeft,
    JointType.AnkleRight,
    JointType.ShoulderCenter,
    JointType.HipCenter
});

private const double RangeNormalizer = .22;
private const double NoiseClip = .05;

And then during each skeleton calculation called by the game loop, we average the core positions to determine the averages of the players as they relate to their playable ring boundary:

public static MovementDescriptorPacket AnalyzeSkeleton(Skeleton skeleton)
{
    // ...
    CoreAverageDelta.X = 0.0;
    CoreAverageDelta.Z = 0.0;

    foreach (JointType jt in CoreJoints)
    {
        CoreAverageDelta.X += skeleton.Joints[jt].Position.X - RingCenter.X;
        CoreAverageDelta.Z += skeleton.Joints[jt].Position.Z - RingCenter.Z;
    }

    CoreAverageDelta.X /= CoreJoints.Count * RangeNormalizer;
    CoreAverageDelta.Z /= CoreJoints.Count * RangeNormalizer;

    // ...
    if (CoreAverageDelta.Z > NoiseClip || CoreAverageDelta.Z < -NoiseClip)
    {
        packet.Move = -CoreAverageDelta.Z;
    }
    if (CoreAverageDelta.X > NoiseClip || CoreAverageDelta.X < -NoiseClip)
    {
        packet.Strafe = CoreAverageDelta.X;
    }
}

In this way, we filter out insignificant data noise and allow the player's average core body to serve as a joystick for driving the robot around. Because the player can lean at any angle, the move and strafe values are set to allow a full 360 degrees of movement freedom, while at the same time not letting any one joint unevenly influence the direction of motion.
Another snippet of code that may be of interest is the WPF3D rendering we used to visualize the skeleton. Since the Kinect returns joint data based off of a center point, it is relatively easy to wire up a working 3D model in WPF3D off of the skeleton data, and we do this in the ringAvatar.xaml control.
In the XAML, we simply need a basic Viewport3D with camera, lights, and an empty ModelVisual3D container to hold our squares. The empty container looks like this:

In the code, we created a generic WPF3DModel that inherits from UIElement3D and is used to store the basic positioning properties of each square. In the constructor of the object, though, we can pass a reference key to a XAML file that defines the 3D mesh to use:

public WPF3DModel(string resourceKey)
{
    this.Visual3DModel = Application.Current.Resources[resourceKey] as Model3DGroup;
}

This is a handy trick when you need to do a fast WPF3D demo and require a certain level of flexibility. To create a 3D cube for each joint when ringAvatar is initialized, we simply do this:

private readonly List<WPF3DModel> _models = new List<WPF3DModel>();

private void CreateViewportModels()
{
    for (int i = 0; i < 20; i++)
    {
        WPF3DModel model = new WPF3DModel("mesh_cube");
        viewportModelsContainer2.Children.Add(model);
        // ...
        _models.Add(model);
    }
    // ...
}

And then each time we need to redraw the skeleton, we loop through the skeleton data and set the cube position like so:

if (SkeletonProcessor.RawSkeleton.TrackingState == SkeletonTrackingState.Tracked)
{
    int i = 0;
    foreach (Joint joint in SkeletonProcessor.RawSkeleton.Joints)
    {
        if (joint.TrackingState == JointTrackingState.Tracked)
        {
            _models[i].Translate(
                joint.Position.X * 8.0,
                joint.Position.Y * 10.0,
                joint.Position.Z * -10.0);
            i++;
        }
    }
    // ...
}

There are a few other areas in the User Console that you may want to dig into further, including the weighting for handling a punch as well as dynamically generating arcs based on the position of the fist relative to the shoulder. However, for this experience, the User Console serves as a secondary display to support the playing experience and gives both the player and audience a visual anchor for the game.
Making a 700lb Tank Drive like a First Person Shooter

The character in a first person shooter (FPS) video game has an X position, a Y position, and a rotation vector. On an Xbox controller, the left stick controls the X,Y position. Y is the throttle (forward and backward), X is the strafing amount (left and right), and the right thumb stick moves the camera to change what you're looking at (rotation). When all three are combined, the character can do things such as run around someone while looking at them.
In the prior project, we had existing code that worked for controlling all 4 motors at the same time, working much like a tank does, so we only had throttle (forward and back) and strafing (left and right). Accordingly, we can move the motors in all directions, but there are still scenarios in which the wheels fight one another and the base won't move. By moving to an FPS style, we eliminate the ability to move the wheels in a non-productive way and actually make it a lot easier to drive.
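The mixing this describes can be sketched in a few lines. This is the standard "arcade to tank" technique, not the project's exact CalculateSpeed code: a rotation term is added to one side's throttle and subtracted from the other's, and the results are clamped to the motors' legal range, so the base turns while moving, much like an FPS character circling a target.

```cpp
#include <algorithm>
#include <cassert>

// General arcade-to-tank mixing sketch (illustrative; the project's real
// CalculateSpeed also folds in strafing and deadzone handling).
struct WheelSpeeds {
    double left;
    double right;
};

// Clamp a mixed value back into the motors' legal [-1, 1] range.
double Clamp(double v) { return std::max(-1.0, std::min(1.0, v)); }

WheelSpeeds Mix(double throttle, double rotation) {
    // Rotation speeds up one side and slows the other.
    return { Clamp(throttle + rotation), Clamp(throttle - rotation) };
}
```

Mix(0.5, 0.0) drives straight at half speed; Mix(0.0, 1.0) spins in place with the two sides running in opposite directions, which is exactly the case where naive per-wheel control can make the wheels fight each other.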
Note that Clint had some wiring "quirks" with polarity and which motor was left vs. right; he had to correct these quirks in software:

public Speed CalculateSpeed(double throttleVector, double strafeVector, double rotationAngle)
{
    rotationAngle = VerifyLegalValues(rotationAngle);
    rotationAngle = AdjustValueForDeadzone(rotationAngle, AllowedRotationAngle, _negatedAllowedRotationAngle);

    // flipped wiring, easy fix is here
    throttleVector *= -1;
    rotationAngle *= -1;

    // miswired, had to flip throttle and strafe for the calculation
    return CalculateSpeed(
        strafeVector + rotationAngle, throttleVector,
        strafeVector - rotationAngle, throttleVector);
}

protected Speed CalculateSpeed(double leftSideThrottle, double leftSideVectorMultiplier, double rightSideThrottle, double rightSideVectorMultiplier)
{
    /* code from Jellybean */
}

Conclusion

The BoxingBots project was one of the biggest things we have built to date. It was also one of our most successful projects. Though it was a rainy, cold day and night in Austin when the bots were revealed, and we had to move locations several times during setup to ensure the bots and computers wouldn't be fried by the rain, they ran flawlessly for the entire event, and contestants seemed to have a lot of fun driving them.


For //build/ 2012, we wanted to showcase what Windows 8 can offer developers. There are a lot of projects showing off great things like contracts and Live Tiles, but we wanted to show off some of the lesser known features. This project focuses on one of those: stereoscopic 3D with DirectX 11.1.
Prior to DirectX 11.1, stereoscopic 3D required specific hardware and a custom API written for that hardware. With DX11.1, stereoscopic 3D has been "democratized." Any GPU that supports DirectX 11.1 can be connected to any device which supports stereoscopic 3D, be it a projector, an LCD TV, or anything else. Plug it in and DirectX does the rest.
From the software side of things, any DX11.1 application can determine if the connected display supports stereoscopic 3D, and choose to render itself separately for the player's left and right eye.
To showcase this feature, we decided to build a very simple game that would give the illusion of depth, but be easy to explain and play. What's easier than Pong? So, we built the world's most over-engineered game of 3D Pong named Maelstrom.


Each player setup consists of two applications: the DirectX 11.1 game written in C++, and the touch-screen controller written in C#/XAML. Both are Windows Store applications. Since this is a two-player game, there are two instances of each application running, one per player. All four applications are networked together using StreamSockets from the Windows Runtime. The two controllers and player two's DirectX game connect to player one's DirectX game, which acts as the "master". Controller input is sent here, and, once the ball and paddle positions are calculated, the data is drawn for player one and sent to player two, which draws the world from the other player's perspective.

Direct3D Application

Getting Started with stereoscopic DirectX11, C++, and XAML

If you have never worked with DirectX before, it can be a little overwhelming at first. And even if you have worked with it in the past, targeting the new Windows 8 ecosystem, along with C++ and XAML, changes how you may have designed your solution previously.
Fortunately, the Windows Dev Center for Windows Store Apps has some great samples to get you started, and we took full advantage of them to get up to speed. For a great, simple example of how to leverage the new stereoscopic feature in Direct3D 11.1, we started with the Direct3D Stereoscopic Sample, which shows the basic adjustments to the render loop for toggling your virtual cameras. However, to see a great example of a simple game structure that also leverages stereoscopic rendering where available, the tutorial found at Walkthrough: a simple Windows Store game with DirectX is invaluable. Further in this article, we will dive deeper into the specifics of stereoscopic rendering in our game.
One thing to note: if you follow the link in the above walkthrough to the original project, it will take you to a C++-only implementation of the game. Of course, all the DirectX game objects, such as the paddle, puck, and walls, are rendered using D3D. However, for HUD (Heads-Up Display) elements, this C++-only sample also uses DirectX exclusively. If you are coming from a managed-code background, this will definitely seem like unnecessary overhead. That is because this C++-only sample was created after the BUILD conference in 2011, when C++ and DirectX still did not play well with XAML.
However, a few months later, the ability to nest DirectX content in a XAML project became available for true hybrid-style solutions (see the article DirectX and XAML interop - Windows Store apps using C++ and DirectX for more information). After this feature was added, the simple shooter game referenced above had its HUD logic rewritten in XAML and posted to Dev Center as the XAML DirectX 3D shooting game sample, which shows stereoscopic support, a simple game engine structure in C++, and XAML integration. At this point, we had all the starter code we needed to start writing our own game.
Game Engine

We modified the base sample to accommodate our needs. We created specific GameObjects, such as Paddle, Puck, etc. to add the behaviors we needed. We also added an Update and Render method to the base GameObject so that, for every frame, we could do any calculations required, and then draw the object to the screen. This is very similar to how XNA sets up its game engine.
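The engine structure described above can be sketched as follows. This is a minimal C++ illustration with made-up class names, not the project's actual code: a GameObject base exposes virtual Update and Render methods, and the frame loop calls both on every object.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Minimal sketch of the XNA-style structure described in the text:
// every object gets an Update (per-frame calculations) and a Render
// (draw to screen) call each frame. Class names are illustrative.
class GameObject {
public:
    virtual ~GameObject() = default;
    virtual void Update(float elapsedSeconds) {}
    virtual void Render() {}
};

// Hypothetical puck that drifts along Z at a constant velocity.
class Puck : public GameObject {
public:
    float z = 0.0f;
    float velocityZ = 2.0f;  // units per second
    void Update(float elapsedSeconds) override { z += velocityZ * elapsedSeconds; }
};

// One frame of the loop: update everything, then draw everything.
void RunFrame(std::vector<std::unique_ptr<GameObject>>& objects, float elapsedSeconds) {
    for (auto& obj : objects) obj->Update(elapsedSeconds);
    for (auto& obj : objects) obj->Render();
}
```

Keeping Update and Render separate means game logic always runs against a consistent world state before anything is drawn, which matters once the same state is also serialized and sent to the other player each frame.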
Game Constants

Because we were tweaking a variety of values like colors, sizes, and camera locations, we created a GameConstants.h header file which contains nothing but these types of values in a single location. This made it very easy to quickly try out various tweaks and see the results on the next run. Using namespaces helped keep the code a bit more manageable here as well. Here’s a quick snippet of that file:

namespace GameConstants
{
    // bounds of the arena
    static const DirectX::XMFLOAT3 MinBound = DirectX::XMFLOAT3( 0.0f,  0.0f,  0.0f);
    static const DirectX::XMFLOAT3 MaxBound = DirectX::XMFLOAT3(19.0f, 10.0f, 90.0f);

    // game camera "look at" points
    static const DirectX::XMFLOAT3 LookAtP1 = DirectX::XMFLOAT3(9.5f, 5.0f, 90.0f);
    static const DirectX::XMFLOAT3 LookAtP2 = DirectX::XMFLOAT3(9.5f, 5.0f,  0.0f);

    // Waiting Room camera positions
    static const DirectX::XMFLOAT3 WaitingEyeP1 = DirectX::XMFLOAT3(GameConstants::MaxBound.x/2, GameConstants::MaxBound.y/2, GameConstants::MaxBound.z - 12.0f);
    static const DirectX::XMFLOAT3 WaitingEyeP2 = DirectX::XMFLOAT3(GameConstants::MaxBound.x/2, GameConstants::MaxBound.y/2, GameConstants::MinBound.z + 12.0f);
    static const DirectX::XMFLOAT3 WaitingEyeMjpegStation = DirectX::XMFLOAT3(GameConstants::MaxBound.x/2, GameConstants::MaxBound.y/2, GameConstants::MinBound.z + 9.6f);

    // game camera eye position
    static const DirectX::XMFLOAT3 EyeP1 = DirectX::XMFLOAT3(GameConstants::MaxBound.x/2, GameConstants::MaxBound.y/2, GameConstants::MinBound.z - 6.0f);
    static const DirectX::XMFLOAT3 EyeP2 = DirectX::XMFLOAT3(GameConstants::MaxBound.x/2, GameConstants::MaxBound.y/2, GameConstants::MaxBound.z + 6.0f);

    static const float Paddle2Position = MaxBound.z - 5.0f;

    namespace PaddlePower
    {
        // power level to light paddle at maximum color
        static const float Max = 9.0f;
        // max paddle power color...each component will be multiplied by power factor
        static const DirectX::XMFLOAT4 Color = DirectX::XMFLOAT4(0.2f, 0.4f, 0.7f, 0.5f);
        // factor to multiply mesh percentage based on power
        static const float MeshPercent = 1.2f;
    };

    // time to cycle powerups
    namespace Powerup
    {
        namespace Split
        {
            static const float Time = 10.0f;
            static const float NumTiles = 4;
            static const DirectX::XMFLOAT4 TileColor = DirectX::XMFLOAT4(0.1f, 0.4f, 1.0f, 1.0f);
            static const float TileFadeUp = 0.20f;
            static const float TileDuration = 2.10f;
            static const float TileFadeDown = 0.20f;
            static const float TileMeshPercent = 2.0f;
            static const float TileDiffusePercent = 2.0f;
        };
    };
}

Stereoscopic 3D

Direct3D must be initialized properly to support stereoscopic displays. When the swap chains are created, an additional render target is required, such that one render target is for the left eye, and one render target is for the right eye. Direct3D will let you know if a stereoscopic display is available, so you can create the swap chain and render targets appropriately.
With those in place, it’s simply a matter of rendering your scene twice, once per eye…that is, once per render target.
For our game this was very simple. Our in-game camera contains two projection matrices, one representing the view from the left eye, and one from the right eye. These are calculated when the projection parameters are set.

void Camera::SetProjParams( _In_ float fieldOfView, _In_ float aspectRatio,
                            _In_ float nearPlane, _In_ float farPlane )
{
    // Set attributes for the projection matrix.
    m_fieldOfView = fieldOfView;
    m_aspectRatio = aspectRatio;
    m_nearPlane = nearPlane;
    m_farPlane = farPlane;
    XMStoreFloat4x4(
        &m_projectionMatrix,
        XMMatrixPerspectiveFovLH( m_fieldOfView, m_aspectRatio, m_nearPlane, m_farPlane )
        );

    STEREO_PARAMETERS* stereoParams = nullptr;
    // Update the projection matrix.
    XMStoreFloat4x4(
        &m_projectionMatrixLeft,
        MatrixStereoProjectionFovLH( stereoParams, STEREO_CHANNEL::LEFT, m_fieldOfView, m_aspectRatio, m_nearPlane, m_farPlane, STEREO_MODE::NORMAL )
        );
    XMStoreFloat4x4(
        &m_projectionMatrixRight,
        MatrixStereoProjectionFovLH( stereoParams, STEREO_CHANNEL::RIGHT, m_fieldOfView, m_aspectRatio, m_nearPlane, m_farPlane, STEREO_MODE::NORMAL )
        );
}

Depending on which eye we are rendering, we grab the appropriate projection matrix and pass it down to the vertex shader, so the final scene is rendered offset for the proper eye.
Collision Detection

If you are just starting to move into 3D modeling and programming, one of the trickier aspects of your game can be collision detection and response. Maelstrom uses primitives for all of the game elements, so our collision code could be a bit more straightforward than complex mesh collision, but understanding a few core math concepts is still critical to grasping what the code is doing.
Fortunately, DirectX provides us with the DirectX Math library, which does the serious heavy lifting, so the main complexity comes from framing the problem and learning how to apply the library.
For example, in our situation we had up to three very fast-moving spheres and needed to check for wall collisions and then handle the appropriate bounce, since some of the walls would also be angled. In a 2D game, collision detection between a circle and an axis-aligned line is very easy: if the distance between the circle and the line is less than or equal to the radius of the circle, they are touching. On every frame, you move your circle based on its velocity and do your collision test again. But even here, your solution may not be that easy, for two reasons.
First, what if the line is angled and not lying flat on the X or Y axis? You have to find the point on the line that is closest to the circle to do your distance calculations. And if you then want it to bounce, you have to rotate the velocity of the circle by the line's angle, calculate your bounce, and then rotate back. And that's just rotated walls in 2D. When you move up to 3D, you have to take the surface normal (which way the 3D plane is facing) into account in your calculations.
The second complexity we needed to account for, which pops up in either 2D or 3D collision detection, is travel between frames. In other words, if your ball is travelling very fast, it may have completely passed through your collision boundary between frames, and you wouldn't notice it if you are only doing a distance/overlap check as outlined above. In our case, the pucks could travel very fast with a speed boost, so we needed a more robust solution. Instead of implementing a simple sphere-plane intersection test, we created a line of motion from where the ball ended on the previous frame to where it currently is after its new velocity is added to its position. That line is first tested to see if it crosses a WallTile. If it does cross, then we know a collision has occurred between frames. We then solve for the time (t) between frames at which the sphere would have first made contact, giving us the exact point of impact, and calculate the appropriate "bounce off" direction.
The final code for a collision test between a puck (a moving sphere) and a wall tile looks like this:

bool GameEngine::CheckWallCollision(Puck^ puck)
{
    bool isIntersect = false;
    bool wallCollision = false;
    for(unsigned int i = 0; i < m_environmentCollisionWalls.size(); i++)
    {
        WallTile^ wall = m_environmentCollisionWalls[i];
        float radius = puck->Radius();
        float signedRadius = puck->Radius();
        float contactTime = 0.0f;

        XMVECTOR contactPlanePoint = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f);
        XMVECTOR contactPuckPosition = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f);
        bool intersectsPlane = false;

        // Determine the velocity of this tick by subtracting the previous position from the proposed current position.
        // In the previous update() cycle, puck->Position() = puck->OldPosition() + ( puck->velocity * timerDelta ).
        // Therefore, this calculated velocity for the current frame movement differs from the stored velocity
        // since the stored velocity is independent of each game tick's timerDelta.
        XMVECTOR puckVectorVelocity = puck->VectorPosition() - puck->OldVectorPosition();

        float D = XMVectorGetX( XMVector3Dot( wall->VectorNormal(), wall->VectorPosition() ) );

        // Determine the distance of the puck to the plane of the wall.
        float dist = XMVectorGetX( XMVector3Dot( wall->VectorNormal(), puck->OldVectorPosition() ) ) - D;
        signedRadius = dist > 0 ? radius : -radius;

        // If the distance of the puck to the plane is already less than the radius, the oldPosition() was intersecting already.
        if ( fabs(dist) < radius )
        {
            // The sphere is touching the plane.
            intersectsPlane = true;
            contactTime = 0.0f;
            contactPuckPosition = puck->OldVectorPosition();
            contactPlanePoint = puck->OldVectorPosition() + wall->VectorNormal()*XMVectorSet(signedRadius, signedRadius, signedRadius, 1.0f);
        }
        else
        {
            // See if the time it would take to cross the plane from the oldPosition() with the current velocity falls within this game tick.
            // puckVelocityNormal is the amount of force from the velocity exerted directly toward the plane.
            float puckVelocityNormal = XMVectorGetX( XMVector3Dot( wall->VectorNormal(), puckVectorVelocity ) );

            // If the puckVelocityNormal times the distance is less than zero, a plane intersection will occur.
            if ( puckVelocityNormal * dist < 0.0f )
            {
                // Determine the contactTime, taking into account that the shell of the sphere ( position() + radius )
                // is what will make contact, not the position alone.
                contactTime = (signedRadius - dist) / puckVelocityNormal;

                // If the contact time is between zero and one, the intersection has occurred between oldPosition() and position().
                if ( contactTime > 0.0f && contactTime < 1.0f )
                {
                    intersectsPlane = true;
                    // This is the position of the puck when its shell makes contact on the plane.
                    contactPuckPosition = puck->OldVectorPosition() + XMVectorScale(puckVectorVelocity, contactTime);
                    // This is the position on the plane where the shell touches.
                    contactPlanePoint = contactPuckPosition - XMVectorScale(wall->VectorNormal(), signedRadius);
                }
            }
        }

        // If the puck has contacted the wall plane, determine if the point of contact falls within the wall boundary for true contact.
        if (intersectsPlane)
        {
            // Kr is the coefficient of restitution. At 1.0, we have a totally elastic bounce with no dampening.
            // At Kr = 0.0, the ball would stop at the wall.
            float Kr = 1.0f;

            // Make sure the puck velocity and wall normal are facing each other.
            float impact = XMVectorGetX( XMVector3Dot( wall->VectorNormal(), puck->VectorVelocity() ) );
            if (impact < 0.0f)
            {
                wallCollision = true;

                // Bounce the vector off the plane.
                XMVECTOR VectorNormal = XMVector3Dot(wall->VectorNormal(), puck->VectorVelocity())*wall->VectorNormal();
                XMVECTOR VectorTangent = puck->VectorVelocity() - VectorNormal;

                puck->Velocity(VectorTangent - (XMVectorScale(VectorNormal, Kr)));
                puck->Position(contactPuckPosition);

                int segment = (int)(puck->Position().z / GameConstants::WallSegmentDepth);
                segment = max(min(segment, GameConstants::NumWallSegments-1), 0);
                auto tiles = m_wallTiles[segment];
                WallTile^ tile = tiles[i];
                if(tile->GetPowerup() == Powerup::Split)
                    SplitPucks();
                break;
            }
        }
    }
    return wallCollision;
}

Drawing Maelstrom

To draw the game, we wanted to use some advanced techniques. We decided to go with a light pre-pass deferred rendering pipeline with normal mapping. That’s a lot of jargon but it isn’t all that complicated once you know what the jargon means, so let’s break it down.
When you draw something in 3D, there are three things that come together to determine the final color of each pixel on the screen: meshes, materials, and lights. A mesh is a collection of triangles that make up a game object (such as a wall tile in Maelstrom). On its own, a mesh is just a bunch of dots and lines. A material makes a mesh look like something. It could be as simple as a solid color but usually it’s a texture and sometimes it’s more (the wall tiles in Maelstrom use both a texture and a normal map to define their material properties). Lastly, lights transform materials by determining how bright they should appear and what sort of tint, if any, they should have. Without lights you would either have complete darkness or you would have flat lighting (where everything has a uniform brightness and adding a tint color would uniformly tint everything on the screen).
Forward Rendering vs. Deferred Rendering vs. Light Pre-Pass Rendering

The simplest approach to drawing 3D graphics is something called forward rendering. With forward rendering, drawing consists of rendering the mesh and calculating its material and all the lights that affect the material all at the same time. The more lights you add, the more complicated your shaders become since you have to determine whether each light affects the material and if so how much. (Ok, so there’s also multi-pass forward rendering, but that has its own problems – more passes mean longer render times and thus a lower frame rate – and we wanted to keep the descriptions simple).
In the last 5 years, many games started using a technique called deferred rendering. In classic deferred rendering, there are two rendering passes. The first pass renders the positions, normals, and material values of all the meshes in the scene to something called a G-Buffer (two or more render targets); nothing is actually drawn to the screen in this first pass. The second pass uses the data from the G-Buffer (which tells us everything we need to know about the geometry that appears at each screen pixel) and combines it with the lights to create the final image that you see. By doing this, we decouple geometry and lighting. This makes it possible to add more lights to the scene with a much smaller performance impact than in forward rendering since we don’t need to create a really complex pixel shader to handle all the lights (single-pass forward rendering) or draw the geometry over and over again for each light (multi-pass forward rendering).
There are drawbacks to classic deferred rendering though. Even a minimal G-Buffer takes up quite a bit of memory and the more different types of materials you want to support, the larger the G-Buffer will need to be. Wolfgang Engel, an XNA/DirectX MVP, came up with a variation on deferred rendering which he called Light Pre-Pass Rendering. This is a three pass technique. We once again use a G-Buffer, but in this case it is smaller than the classic deferred rendering G-Buffer and can even be squeezed down to a single render target which makes it viable for graphics hardware which does not support drawing to multiple render targets at the same time.
The G-Buffer is created in the first pass by rendering all the scene geometry. It only needs to store normals and the geometry’s world position. For simplicity, we stored the world position of the geometry at each screen position in one render target and its normal in a second render target.
The next pass draws the lights to a light accumulation buffer. The buffer starts out entirely dark and each light that is rendered adds brightness (and tint, if any) to the light buffer. These lighting calculations take into account the normal and world position of the geometry that is at each screen position, drawing the values from the G-Buffer, such that each light only affects the pixels it is supposed to have an impact on. In Maelstrom we ended up only using point lights (spheres of light that fade out as you get further from the light’s position), but you can use any kind of light you can imagine (spot lights and directional lights are the two other common light types). Adding more lights has a very low impact on rendering time and this kind of lighting tends to be much easier for the designer to work with since there’s no need for him or her to understand HLSL or even any complicated C++ in order to add, remove, reposition, or otherwise change any lights.
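As a rough illustration of that light-accumulation idea, here is a CPU sketch (an assumption-laden example, not the game's actual HLSL) of a point light's contribution to one pixel, given the position and normal pulled from the G-Buffer:

```cpp
#include <cmath>
#include <algorithm>

// Illustrative types and math helpers (not the game's code).
struct Vec3 { float x, y, z; };

float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3  Sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
float Length(Vec3 v)      { return std::sqrt(Dot(v, v)); }

// Contribution of one point light to one pixel. Attenuation here fades
// linearly to zero at the light's radius (the exact falloff is an assumption).
float PointLightIntensity(Vec3 pixelPos, Vec3 pixelNormal,
                          Vec3 lightPos, float lightRadius) {
    Vec3  toLight = Sub(lightPos, pixelPos);
    float dist    = Length(toLight);
    if (dist >= lightRadius) return 0.0f;                 // outside the sphere of light
    Vec3  l     = { toLight.x/dist, toLight.y/dist, toLight.z/dist };
    float nDotL = std::max(0.0f, Dot(pixelNormal, l));    // diffuse term
    float atten = 1.0f - dist / lightRadius;              // linear falloff
    return nDotL * atten;  // added into the light accumulation buffer
}
```

Because each light only touches the pixels inside its radius, many such lights can be accumulated additively at low cost, which is exactly why this pass scales so well.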
The final pass draws the geometry a second time. This time, though, all the lighting calculations are done so all we do here is just render the meshes with their appropriate materials, adjust the color values and intensities from the material based on the light buffer value, and we’re done. Each rendering style (forward, deferred, and light pre-pass) has its own benefits and drawbacks, but in this case light pre-pass was a good solution and choosing it let us show how a state-of-the-art graphics technique works.
Normal Mapping

We also incorporated normal mapping. Normal mapping makes use of a special texture (a normal map) in addition to the regular texture that a material has. Normals are values used in lighting calculations to determine how much a particular light should affect a particular pixel. If you wanted to draw a brick wall, you would typically create two triangles that line up to form a rectangle and apply a texture of a brick wall to them as their material. The end result doesn’t look very convincing, though: unlike a real brick wall, there are no grooves in the mortared area between the bricks, since our brick and mortar is just a flat texture applied to flat triangles. We could fix this by changing from two triangles to a fully modeled mesh with actual grooves, but that would add thousands of extra vertices and lower the frame rate.
So instead we use a normal map, which fakes it. One of the reasons the two-triangles-plus-brick-texture approach doesn’t look right is that the lighting doesn’t behave as it would on a real brick wall (or on a fully modeled mesh of one). The normals point straight out, perpendicular to the face of the rectangle, whereas on a fully modeled mesh with actual grooves, the surface normals would only point straight out on the bricks themselves and would curve along the mortared areas, so the lighting calculations would give us the right levels of light and dark depending on the location and direction of the light. That’s where a normal map comes in. The normal map (which you can generate using a plugin for Adobe Photoshop or GIMP, or by modeling a real brick wall in 3ds Max, Maya, or Blender and then “baking” a normal map from it) lets us get the same lighting effect as the fully modeled mesh while keeping the simple two-triangle approach that gives us really good performance. There are limits to the effectiveness of normal mapping (you can’t use it to fake anything too deep, and it doesn’t hold up as well if the camera can get really close to the object), but in Maelstrom it allowed us to keep the walls as simple triangles while making it seem like there were geometric grooves in the wall. Here’s a before and after screenshot using normal mapping:

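For the curious, the usual normal-map convention (an assumption about the common encoding, not code from Maelstrom) maps each texel's RGB bytes from [0, 255] to a unit normal in [-1, 1], which then replaces the flat surface normal in the diffuse lighting term:

```cpp
#include <cmath>
#include <algorithm>

// Illustrative type (not the game's code).
struct Vec3 { float x, y, z; };

// Decode a normal-map texel: each byte maps [0,255] -> [-1,1].
// A "flat" texel of (128, 128, 255) decodes to roughly (0, 0, 1),
// i.e. pointing straight out of the surface.
Vec3 DecodeNormal(unsigned char r, unsigned char g, unsigned char b) {
    auto f = [](unsigned char c) { return c / 255.0f * 2.0f - 1.0f; };
    Vec3 n = { f(r), f(g), f(b) };
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return { n.x/len, n.y/len, n.z/len };   // renormalize after quantization
}

// Diffuse (N dot L) term: the decoded per-pixel normal stands in for the
// flat triangle normal, which is what makes the grooves catch the light.
float Lambert(Vec3 n, Vec3 lightDir) {
    return std::max(0.0f, n.x*lightDir.x + n.y*lightDir.y + n.z*lightDir.z);
}
```

Texels that tilt away from the light decode to normals with a smaller N·L, which is what fakes the shading of the mortar grooves without any extra geometry.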
Post-Processing Effects

We also used several post-processing effects. The first was the bloom effect. Bloom is an effect that analyzes a rendered image, identifies parts that are above a certain brightness threshold, and makes those areas brighter and adds a peripheral glow to them as well, giving it a look and feel that is similar to a neon sign or to the light cycles in the movie Tron. Here’s the same shot as above with the addition of bloom:

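The bright-pass step at the heart of bloom can be sketched on the CPU like this (an illustrative version; the real effect runs in a pixel shader, and the threshold value is an assumption):

```cpp
#include <algorithm>
#include <cmath>

// Illustrative color type (not the game's code).
struct Color { float r, g, b; };

// Standard Rec. 601 luminance weights.
float Luminance(Color c) {
    return 0.299f*c.r + 0.587f*c.g + 0.114f*c.b;
}

// Keep only pixels brighter than the threshold, rescaled so the brightest
// areas dominate. The result would then be blurred and added back over the
// scene to produce the glow.
Color BrightPass(Color c, float threshold) {
    float lum = Luminance(c);
    if (lum <= threshold) return { 0.0f, 0.0f, 0.0f };
    float scale = (lum - threshold) / (1.0f - threshold);  // 0 at threshold, 1 at full bright
    return { c.r*scale, c.g*scale, c.b*scale };
}
```

Everything below the threshold goes black, so after the blur only the hot spots glow, which is what gives the Tron-like look described above.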
We also made use of two different damage effects. Whenever the player took damage, we showed a reddish tinge around the edge of the screen. This was simply a full-screen overlay texture that is actually white but is tinted red by the shader. It is alpha-blended over the final rendered scene and fades out over the course of a couple of seconds. Rather than fading out linearly, we use a power curve, which helps sell the effect as being more complicated than it really is.
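That fade could be sketched like this (the exponent is a guess for illustration; the article doesn't give the actual curve):

```cpp
#include <cmath>

// Hypothetical power-curve fade for the damage overlay: instead of a linear
// ramp, alpha follows t^exponent, so the tint drops off quickly at first and
// then lingers faintly before disappearing.
float DamageAlpha(float elapsed, float duration, float exponent = 3.0f) {
    if (elapsed >= duration) return 0.0f;
    float t = 1.0f - elapsed / duration;   // 1 at the moment of damage, 0 at the end
    return std::pow(t, exponent);
}
```

Halfway through the fade, a linear ramp would still be at 0.5 alpha, while this curve is already down to 0.125, which reads as a much livelier effect for one extra multiply.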
Lastly we added in some damage particles. The particles themselves were created using a geometry shader. The vertex shader took in a series of points in world space and passed these points along to the geometry shader. The geometry shader expanded these points into two triangles by generating the missing vertices and applying the world-view-projection transformation matrix to transform the positions from world coordinates to homogeneous coordinates so that they can then be rasterized correctly by D3D and the resulting pixels passed along to the pixel shader. Once again we used a simple texture with alpha blending to simulate much more complicated geometry than we were actually drawing. In this case we also made use of a texture atlas (an image made up of smaller images) which, in conjunction with the randomizer we used to generate the initial vertices for the particles, allowed us to have several different particle textures. Like with the power curve for the damage texture, the texture atlas allowed us to make the particles seem more complex than they really were. It also let us show off the use of a geometry shader, a feature that was added in DirectX 10 and requires DirectX 10 or higher hardware.
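The texture-atlas lookup itself is simple to sketch (illustrative code, not the game's): for an atlas laid out as a grid of equal tiles, a particle's randomly chosen index maps to a UV offset and scale for its quad.

```cpp
// Hypothetical UV rectangle for one tile of a grid-layout texture atlas.
struct UvRect { float u, v, width, height; };

// Map a tile index to its UV sub-rectangle. Tiles are numbered left to
// right, top to bottom, so index / columns gives the row and
// index % columns gives the column.
UvRect AtlasTile(int index, int columns, int rows) {
    UvRect r;
    r.width  = 1.0f / columns;
    r.height = 1.0f / rows;
    r.u = (index % columns) * r.width;
    r.v = (index / columns) * r.height;
    return r;
}
```

The geometry shader (or the vertex data it expands) just scales and offsets each quad's texture coordinates by this rectangle, so one texture bind serves every particle variant.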

All audio was done using the XAudio2 API. Thankfully, we were able to get a huge head start by using some of the code from the sample project we started from. The audio engine sets up the very basics of XAudio2, and then wraps that with a simpler API for the rest of the application to call.
We don’t have many sound effects, so on startup we load all sound effects and music cues into a std::map keyed on a SoundCue enum. Sounds are loaded using the Media Foundation classes, and the resulting byte data of each sound (plus some format information) is stored in our SoundEffect class.

void AudioEngine::Initialize()
{
    m_audio = ref new Audio();
    m_audio->CreateDeviceIndependentResources();
    m_mediaReader = ref new MediaReader();

    // Impacts
    m_soundMap[SoundCue::BallLaunch] = LoadSound("Sounds\\Impacts\\BallLaunch.wav");
    m_soundMap[SoundCue::Buzz]       = LoadSound("Sounds\\Impacts\\Buzz.wav");
    m_soundMap[SoundCue::Impact1]    = LoadSound("Sounds\\Impacts\\Impact1.wav");
    m_soundMap[SoundCue::Impact2]    = LoadSound("Sounds\\Impacts\\Impact2.wav");
    ...
}

SoundEffect^ AudioEngine::LoadSound(String^ filename)
{
    Array<byte>^ soundData = m_mediaReader->LoadMedia(filename);
    auto soundEffect = ref new SoundEffect();
    soundEffect->Initialize(m_audio->SoundEffectEngine(), m_mediaReader->GetOutputWaveFormatEx(), soundData);
    return soundEffect;
}

When the game needs to play a sound, it simply calls the PlaySound method, passing in the cue to play and the volume to play it at. PlaySound keys into the sound map, gets the associated SoundEffect, and plays it.

void AudioEngine::PlaySound(SoundCue cue, float volume, bool loop)
{
    m_soundMap[cue]->Play(volume, loop);
}

MJPEG Cameras

To achieve the effect of seeing the opponent in stereoscopic 3D, we strapped two Axis M1014 network cameras side by side. Using Brian’s MJPEG Decoder library, with a special port to Windows Runtime (available soon), individual JPEG frames are pulled off each camera and then applied to a texture at the back of the arena. The image from the left camera is drawn when DirectX renders the player’s left eye, and the frame from the right camera is drawn when DirectX renders the right eye. This is a cheap and simple way to pull off live stereoscopic 3D.

void MjpegCamera::Update(GameEngine^ engine)
{
    if(m_decoderLeft != nullptr)
        UpdateTexture(m_decoderLeft->CurrentFrame, &textureLeft);
    if(m_decoderRight != nullptr)
        UpdateTexture(m_decoderRight->CurrentFrame, &textureRight);
    Face::Update(engine);
}

void MjpegCamera::Render(_In_ ID3D11DeviceContext *context, _In_ ID3D11Buffer *primitiveConstantBuffer, _In_ bool isFirstPass, int eye)
{
    if(eye == 1 && textureRight != nullptr)
        m_material->SetTexture(textureRight.Get());
    else if(textureLeft != nullptr)
        m_material->SetTexture(textureLeft.Get());
    GameObject::Render(context, primitiveConstantBuffer, isFirstPass);
}

With the distance between the cameras being about the distance between human eyes (the interaxial distance), the effect works pretty well!


The Tablet controller is the touch screen that lets the player control their 3D paddle in the Game Console app. For this part of the game system, there wasn't a reason to dive deep into DirectX and C++, since the controller is neither stereoscopic nor visually intense, so we kept things simple with C#.
Since the controller would also serve as our attract screen in the podium to entice potential players, we wanted the wait screen to do something eye-catching. However, if you are moving from C# in WPF to C# and XAML in WinRT and are used to taking advantage of some of the more common "memory-hoggish UX hacks" from WPF, you'll quickly find them absent in WinRT! For example, we no longer have OpacityMask, non-rectangular clipping paths, or the ability to render a UIElement to a bitmap. Our bag of UX tricks may be in need of an overhaul. However, what we do get in C#/XAML for WinRT is Z rotation, something we've had in Silverlight but I personally have been begging for in WPF for a long time.
Therefore, the opening animation in the controller is a procedurally generated effect that rotates PNG "blades" in 3D space, creating a very compelling effect. Here is how it works: the Blade user control is a simple Canvas that displays one of a few possible blade images. The Canvas has a RenderTransform to control the scale and rotation, and a PlaneProjection, which allows us to rotate the blade graphic in X, Y, and Z space.

Each Blade is added dynamically to the controller when the tablet application first loads and is stored in a List so that its Update() method can be called during the CompositionTarget.Rendering loop.

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    canvas_blades.Children.Clear();
    _blades.Clear();

    for (int i = 0; i < NumBlades; i++)
    {
        Blade b = new Blade { X = 950.0, Y = 530.0 };
        int id = _rand.Next(0, 5);
        b.SetBlade(id);
        b.Speed = .1 + id * .1;
        SeedBlade(b);
        _blades.Add(b);
        canvas_blades.Children.Add(b);
    }
}

void CompositionTarget_Rendering(object sender, object e)
{
    if(_inGame)
    {
        paddle.Update();
    }
    else if(_isClosing)
    {
        foreach (Blade b in _blades)
            b.UpdateExit();
    }
    else
    {
        foreach (Blade b in _blades)
            b.Update();
    }
}

Since each Blade has been assigned an individual speed and angle of rotation along all three axes, the Update function is very straightforward. The reason we keep the rotation values between -180 and 180 during the spinning loop is to make it easier to spin them out to zero when we need them to eventually leave the screen.

public void Update()
{
    _rotX += Speed;
    _rotZ += Speed;
    _rotY += Speed;

    if (_rotX > 180) _rotX -= 360.0;
    if (_rotX < -180) _rotX += 360.0;
    if (_rotY > 180) _rotY -= 360.0;
    if (_rotY < -180) _rotY += 360.0;
    if (_rotZ > 180) _rotZ -= 360.0;
    if (_rotZ < -180) _rotZ += 360.0;

    projection.RotationX = _rotX;
    projection.RotationY = _rotY;
    projection.RotationZ = _rotZ;
}

public void UpdateExit()
{
    _rotX *= .98;
    _rotZ *= .98;
    _rotY += (90.0 - _rotY) * .1;

    projection.RotationX = _rotX;
    projection.RotationY = _rotY;
    projection.RotationZ = _rotZ;
}

Network

To continue the experiment of blending C# and C++ code, the network communication layer was written in C# as a Windows Runtime component. Two classes are key to the system: SocketClient and SocketListener. Player one’s game console starts a SocketListener to listen for incoming connections from each game controller, as well as from player two’s game console. Each of those uses a SocketClient object to make the connection.
In either case, once the connection is made, the client and the listener sit and wait for data to be transmitted. Data must be sent as an object which implements our IGamePacket interface. This contains two important methods: FromDataReaderAsync and WritePacket. These methods serialize and deserialize the byte data to/from an IGamePacket of whatever type is specified in the PacketType property.

namespace Coding4Fun.Maelstrom.Communication
{
    public enum PacketType
    {
        UserInputPacket = 0,
        GameStatePacket
    }

    public interface IGamePacket
    {
        PacketType Type { get; }
        IAsyncAction FromDataReaderAsync(DataReader reader);
        void WritePacket(DataWriter writer);
    }
}

The controllers write UserInputPackets to the game console, consisting of the X,Y position of the paddle, as well as whether the player has tapped the screen to begin.

public sealed class UserInputPacket : IGamePacket
{
    public PacketType Type { get { return PacketType.UserInputPacket; } }
    public UserInputCommand Command { get; set; }
    public Point3 Position { get; set; }
}

Player one’s game console writes a GameStatePacket to player two’s game console, consisting of the positions of each paddle, each ball, the score, and which tiles are lit for the ball splitter power-up. Player two’s Update and Render methods use this data to draw the screen appropriately.

The hardware layer of this project is responsible for two big parts. One is a rumble effect that fires every time the player is hit, and the other is a lighting effect that changes depending on the game state.
As all good programmers do, we reused code from another project. We leveraged the proven web server from Project Detroit for our Netduino, with a few changes. Here, we had static class "modules" which knew how to talk to the physical hardware, and "controllers" which handled items like a player scoring, game-state animations, and taking damage. Because the modules are static classes, we can reference them in multiple classes without issue.
NETMF Web Server

When a request comes in, we perform the requested operation and then return a newline character to verify we got the request. If you don’t return any data, some clients will fire a second request, which can cause some odd behaviors. The flow is as follows:

- Parse the URL
- Get the target controller
- Execute the appropriate action

private static void WebServerRequestReceived(Request request)
{
    var start = DateTime.Now;
    Logger.WriteLine("Start: " + request.Url + " at " + DateTime.Now);

    try
    {
        var data = UrlHelper.ParseUrl(request.Url);
        var targetController = GetController(data);
        if (targetController != null)
        {
            targetController.ExecuteAction(data);
        }
    }
    catch (Exception ex0)
    {
        Logger.WriteLine(ex0.ToString());
    }

    request.SendResponse(NewLine);
    Logger.WriteLine("End: " + request.Url + " at " + DateTime.Now + " took: " + (DateTime.Now - start).Milliseconds);
}

public static IController GetController(UrlData data)
{
    if (data.IsDamage)
        return Damage;
    if (data.IsScore)
        return Score;
    if (data.IsGameState)
        return GameState;

    // can assume invalid
    return null;
}

Making It Shake

We used a SparkFun MP3 Trigger board, a subwoofer amplifier, and bass rumble plates to create this effect. The MP3 board requires power and two jumpers to trigger playback. It has an audio jack that gets plugged into the amplifier, which powers the rumble plates.
From here, we just needed to wire a ground to the MP3 player’s ground pin, and the target pin on the MP3 player to a digital IO pin on the Netduino. In the code, we declare it as an OutputPort and give it an initial state of true. When we get a request, we toggle the pin on a separate thread.

private static readonly OutputPort StopMusic = new OutputPort(Pins.GPIO_PIN_D0, true);
private static readonly OutputPort Track1 = new OutputPort(Pins.GPIO_PIN_D1, true);
// .. more pins

public static void PlayTrack(int track)
{
    switch (track)
    {
        case 1:
            TogglePin(Track1);
            break;
        // ... more cases
        default:
            // stop all, invalid choice
            TogglePin(StopMusic);
            break;
    }
}

public static void Stop()
{
    TogglePin(StopMusic);
}

private static void TogglePin(OutputPort port)
{
    var t = new Thread(() =>
        {
            port.Write(false);
            Thread.Sleep(50);
            port.Write(true);
        });
    t.Start();
}

Lighting Up the Room

For lighting, we used some RGB lighting strips. Each strip changes to a single color at a time and is driven by a PWM signal. This is different from the lighting we used in Project Detroit, which allowed us to individually control each LED and used SPI to communicate. We purchased an RGB amplifier to allow a PWM signal to power a 12-volt strip. We purchased ours from US LED Supply; the exact product was "RGB Amplifier 4A/Ch for interfacing with a Micro-Controller (PWM/TTL Input)".
We alter the Duty Cycle to shift the brightness of the LEDs and do this on a separate thread. Below is a stripped down version of the lighting hardware class.

public static class RgbStripLighting
{
    private static readonly PWM RedPwm = new PWM(Pins.GPIO_PIN_D5);
    private static readonly PWM GreenPwm = new PWM(Pins.GPIO_PIN_D6);
    private static readonly PWM BluePwm = new PWM(Pins.GPIO_PIN_D9);

    private const int ThreadSleep = 50;
    private const int MaxValue = 100;

    const int PulsePurpleIncrement = 2;
    const int PulsePurpleThreadSleep = 100;

    private static Thread _animationThread;
    private static bool _killThread;

    #region game state animations

    public static void PlayGameIdle()
    {
        AbortAnimationThread();
        _animationThread = new Thread(PulsePurple);
        _animationThread.Start();
    }

    #endregion

    private static void PulsePurple()
    {
        while (!_killThread)
        {
            // ramp the purple (red + blue) up, then back down
            for (var i = 0; i <= MaxValue; i += PulsePurpleIncrement)
            {
                SetPwmRgb(i, 0, i);
            }
            for (var i = MaxValue; i >= 0; i -= PulsePurpleIncrement)
            {
                SetPwmRgb(i, 0, i);
            }
            Thread.Sleep(PulsePurpleThreadSleep);
        }
    }

    private static void AbortAnimationThread()
    {
        _killThread = true;
        try
        {
            if (_animationThread != null)
                _animationThread.Abort();
        }
        catch (Exception ex0)
        {
            Debug.Print(ex0.ToString());
            Debug.Print("Thread still alive: ");
            Debug.Print("Killed Thread");
        }
        _killThread = false;
    }

    private static void SetPwmRgb(int red, int green, int blue)
    {
        // typically, 0 == off and 100 == on;
        // things are flipped for this lighting, so build the inversion in
        red = MaxValue - red;
        green = MaxValue - green;
        blue = MaxValue - blue;

        red = CheckBound(red, MaxValue);
        green = CheckBound(green, MaxValue);
        blue = CheckBound(blue, MaxValue);

        RedPwm.SetDutyCycle((uint) red);
        GreenPwm.SetDutyCycle((uint) green);
        BluePwm.SetDutyCycle((uint) blue);

        Thread.Sleep(ThreadSleep);
    }

    public static int CheckBound(int value, int max)
    {
        return CheckBound(value, 0, max);
    }

    public static int CheckBound(int value, int min, int max)
    {
        if (value < min)
            value = min;
        else if (value > max)
            value = max;
        return value;
    }
}

Conclusion
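The inversion and clamping in SetPwmRgb can be sketched in a few lines. The Python below is an illustration only (not the Netduino code): it shows how a requested brightness level maps to the inverted duty cycle this strip's amplifier expects.

```python
MAX_VALUE = 100  # duty-cycle range: 0 .. 100


def clamp(value, lo=0, hi=MAX_VALUE):
    """Keep a duty-cycle value inside the valid range."""
    return max(lo, min(hi, value))


def to_duty_cycle(level):
    """Map a desired brightness (0..100) to the inverted duty cycle.

    The strip's wiring flips the signal: 0 drives the LEDs fully on,
    100 turns them off, so we write MAX_VALUE - level.
    """
    return clamp(MAX_VALUE - level)


# brightness 0 -> duty 100 (strip off); brightness 100 -> duty 0 (full on)
```

Out-of-range requests are clamped rather than rejected, which matches the forgiving CheckBound behavior in the class above.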

We built this experience over the course of roughly four to five weeks. It was our first DirectX application, and our first C++ application, in a very long time. Even so, we picked up the new platform and the language changes fairly easily and created a simple yet fun game in that time.

Music track - "FatLoad - The Bullet (no master)" by FreaK NeoS
Sound effects + music editing - David Wallimann
DirectX shaders - Michael McLaughlin


Hi, I was wondering if someone could help me with a display problem (or at least tell me what might be going on).
I just bought an Acer S231HL LCD monitor for my desktop (Gateway SX2850-01).
When attached with the VGA cable, the colors look fine. The blacks are deep and the colors bright. However, when I switch to HDMI, the black background looks grey, almost as if the brightness is too high. All of the other colors look washed out as well.
I have tried changing the settings (brightness, contrast, etc.) in the control panel, as well as through the monitor's own menu, but nothing helps. If I lower the brightness all the way, everything turns dark, but the background is still grey!
I looked for an updated driver, but could not find one.
I have also tried a new HDMI cable, but that did not work either.
It is driving me crazy that I can only use this with VGA!
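Grey blacks over HDMI are often a quantization-range mismatch: the graphics output sends "limited range" video levels (16–235) while the monitor interprets them as full range (0–255), so black arrives as level 16 and is drawn as dark grey. The usual fix is a "full range" or "HDMI black level" option in the GPU control panel or the monitor's menu. A small Python sketch of the mapping, purely to illustrate the arithmetic:

```python
def limited_to_full(v):
    """Expand a limited-range video level (16..235) to full range (0..255)."""
    return round((v - 16) * 255 / (235 - 16))


# Correctly converted, limited-range black (16) becomes true black (0)
# and limited-range white (235) becomes full white (255). If the display
# skips this conversion, level 16 is drawn as-is: roughly 6% grey.
```

This would also explain why VGA looks fine on the same monitor: analog VGA does not carry the limited/full range signaling that HDMI does.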

Thank you for any help you can offer!


I have new Sony CW laptop running Windows 7 (64-bit) with an Nvidia GeForce GT 230m graphics card.

The tiny problem I have is that I can't adjust the video settings (colour/contrast/brightness etc.) in Windows Media Player 12 as I am used to being able to do in previous versions. I use this function all the time and it is not enough for me to simply change the global settings as I know it should be possible within Windows Media Player.

I updated the Nvidia driver and downloaded the K-Lite codec pack (too complicated - didn't know which codecs to adjust) with no result. The Nvidia control panel settings are all set to 'use the video player's adjustment controls' and I have had no luck adjusting any of the options.

I have read online about video overlay, but I cannot find any options for this in the Nvidia control panel, and I don't even know if that is the problem. I also looked at the Display options in the Control Panel but nothing worked.

Any help would be greatly appreciated.


**Additional Information**

I discovered that the Brightness/Contrast controls do work in Quicktime and KMPlayer, but I've still had no luck achieving the same with Windows Media Player 12. I downloaded the Windows 7 Codec Pack to no effect. Here is a list of the codecs installed according to the technical info on Windows Media Player:

Microsoft RLE
Microsoft Video 1
Microsoft YUV
Intel IYUV codec IYUV
Toshiba YUV Codec
Cinepak Codec by Radius
Xvid MPEG-4 Codec
DivX 6.8.5 Codec (2 Logical CPUs)
DivX 6.8.5 YV12 Decoder
ffdshow Video Codec
VP60® Simple Profile
VP61® Advanced Profile
VP62® Heightened Sharpness Profile
Mpeg4s Decoder
WMV Screen decoder
WMVideo Decoder
Mpeg43 Decoder
Mpeg4 Decoder

If this isn't the correct forum, I apologize and expect this post will be moved to the appropriate forum.

I want to know if there is a way to get printed colors to match, or at least come close to, what I see on my monitor. I am assuming that my monitor (a Samsung SyncMaster 940BF) is displaying the colors correctly and that my printer (an Epson Stylus Photo R200) is off. I do have the latest drivers for the printer, and it makes no difference whether I am printing from Vista or XP.

For example, reds are pinkish and yellows are also pinkish, but black and white are fine. I get the same results on different media, e.g., bright white paper, photo stock, or printable CD/DVD media. I've tried different settings in the printer driver (photo, text and photo, etc.), but there is no change. For years I have been aware of the "Color Management" section in the video drivers, but I don't know anything about it; I wonder whether that is where I should make changes, or whether I should be looking at the video card drivers instead. My video card is an EVGA GeForce 7600 GT, also with the latest Nvidia drivers. Something has obviously changed, but I haven't a clue what, since this issue only started recently.

Can someone hopefully at least put me on the right track to correct this problem?


Oh how I shudder when I see that dread message appear when I open a file in Word 97.

In the past, I've run the file through a text-only program such as Notepad. The output is clean but unformatted.

This time, the allegedly-corrupt file is too large for that. So I started to tinker. I found that, as usual, the engineers had managed to contaminate the template. So I went through my usual decon routine, and in the process may have stumbled into the problem!

When I replace the styles Heading 1 through Heading 9 with my known-good versions, I get a message "There is not enough memory or disk space to update the display." Then the styles appear, but with the level numbering wrong on the first three headings and no numbering at all on the remainder. If I try to correct the heading styles again, I get the *same* message, only this time I lose the numbering altogether, and all further attempts to alter the numbering are totally ignored!

Is that weird or what?

Anybody got a bright idea for this one?

Cheers, 3MP

Well, guys, I couldn't resist. I broke down and bought the m505 Palm Pilot because I hate the display on my Vx and didn't like the size and weight of the other color models. Plus, I did not want to deal with Windows CE on my handheld.

First off, the display without backlighting is pathetic. Ahhh, but WITH backlighting, it is very nice indeed. The 64K colors are rich without being overwhelming, although I can hardly wait to see what some of the more serious graphics look like on it. You can't control the brightness, but that doesn't appear to be a real problem, at least not on mine. Backlighting is OFF by default and you have to turn it on by holding the power button down.

The feel is very different from earlier Palms. The screen is springier and seems to have an overlay over the glass, so it has an odd, rubbery feel at first. The lightweight plastic styli don't work well on it at all; you need the heft of one of the metal ones. Oh, and they only give you one with this critter, unlike the two that came with my Vx.

The biggest gotcha? The thing assumes your PC has a usable USB port. Yep, there is NO way to connect directly to a serial port out of the box, which leaves NT and Windows 95 users out in the cold. Of course, there is an insert in the box that says you can order a serial cradle (for extra bucks, naturally). However, even Palm doesn't have a supply of the things (they expect a biiiiig shipment next week!). Expansion cards, accessories, peripherals? They're all backordered everywhere you look.

The upshot is that mine looks pretty, but I can't even play solitaire on it for at least a week or two: I can't sync it without a cradle or serial cable (that item's backordered too), so I can't load any of the software I need on it. I think I like it, but a final opinion will have to wait until I can actually load Documents to Go on it and get my address book moved over from the Vx.

I'll keep you posted.

I've installed IE 9, and now, while everything displays normally, when I go to any of the menus (like File), the text is too dim to read.
Other Windows 7 applications are OK.

Is there a setting somewhere for menu brightness?