Monday, August 29, 2011

Why Apple is on top today: the top 10 technology decisions

As Apple overtook Microsoft in market cap and as Steve Jobs reminisced about some recent history, I thought I’d reflect on some of the decisions that brought Apple to the pinnacle of technology companies.  The criteria I used to select these are how improbable, and hence courageous, they were when taken, and how much impact they have had on the industry. Since the impact of these decisions could not be felt for a long time, the courage required to act early is all the more remarkable.
At the time they were made, none of these decisions did anything to move the stock price or cause great rejoicing. In fact, in many cases the decisions were ridiculed by those who should know better. Yet each one became a massive pillar of the foundation of Apple as it is today.  As you read through, think of the decisions that Apple competitors made or did not make in the same time frame.
Top 10 Apple technology decisions of the 2000 decade in reverse order:
    10. HTML 5 (canvas). The canvas element was initially introduced by Apple for use inside its own Mac OS X WebKit component, powering applications like Dashboard widgets and the Safari browser. Still in its infancy, it is an Apple technology that promises to finally offer a credible Flash equivalent.  Remember that Flash is now over 10 years old and was incubated at a time when the Web was barely 1.0.  HTML canvas finally brings scriptable graphics to the modern Web.
    9. H.264. The decision to support a standard video codec at a time when the industry was mostly arguing over whether Blu-ray or HD-DVD would win signaled a foresight that physical media was not long for this world.  The consequences are still being weighed as YouTube and other media sources are shifting their inventory to this format.
    8. iTunes. iTunes started as a personal music database and grew into a music and media storefront, a payment-processing engine, a device synchronization and update center, and finally an application store.  It was forked into both a PC and a mobile version.  Without iTunes the iPod would have been just another MP3 player.
    7. WiFi. Implemented in AirPort before the spec was finished, WiFi gave the laptop wings.  Think back to when you still had to plug a wire into a computer to have it communicate. PCs did not catch up on wireless for years.  WiFi was even a rare feature on mobile phones when the iPhone shipped in 2007, with some operators (Verizon) still banning it from their phones in 2009.
    6. FireWire. Along with iTunes, FireWire brought the iPod to life.  Launched at a time when external drives required screwdrivers and a circuit board to install, FireWire made opening your computer case to expand it as rare as opening the hood of your car to fix it.
    5. iLife. Did to user-generated media what word processors did to words and spreadsheets did to numbers. A singularly great reason to get a Mac.
    4. Portability. OSX migrated across three different CPU architectures in less than 10 years. Apple revealed that they built it from day one to be portable to different CPUs.  That took amazing foresight in the late 90s.
    3. WebKit. Speed and flexibility.  The biggest reason why we can surf on a phone today.
    2. Multi-Touch UI. Seven years in development and still sublime.  Starting down this road in 2003 must have seemed like science fiction.  But, unlike other companies, Apple took this science experiment to market.  Steve Jobs described how he wanted an on-screen keyboard and how scrolling opened his eyes to making a phone. That took vision.
    1. OSX/Cocoa. The company’s backbone.  Everything above hangs off OSX. Remarkably scalable, portable, robust and reliable.  With roots going back decades, it’s the canonical OS. Somebody should build a monument to it.

The top 10 strategic technologies for 2010 include:


Cloud Computing. Cloud computing is a style of computing that characterizes a model in which providers deliver a variety of IT-enabled capabilities to consumers. Cloud-based services can be exploited in a variety of ways to develop an application or a solution. Using cloud resources does not eliminate the costs of IT solutions, but it does rearrange some and reduce others. In addition, enterprises consuming cloud services will increasingly act as cloud providers themselves, delivering application, information or business process services to customers and business partners.
Advanced Analytics. Optimization and simulation use analytical tools and models to maximize business process and decision effectiveness by examining alternative outcomes and scenarios before, during and after process implementation and execution. This can be viewed as a third step in supporting operational business decisions. Fixed rules and prepared policies gave way to more informed decisions powered by the right information delivered at the right time, whether through customer relationship management (CRM), enterprise resource planning (ERP) or other applications. The new step is to provide simulation, prediction, optimization and other analytics, not simply information, to empower even more decision flexibility at the time and place of every business process action. This new step looks into the future, predicting what can or will happen.
Client Computing. Virtualization is bringing new ways of packaging client computing applications and capabilities. As a result, the choice of a particular PC hardware platform, and eventually the OS platform, becomes less critical. Enterprises should proactively build a five to eight year strategic client computing roadmap outlining an approach to device standards, ownership and support; operating system and application selection, deployment and update; and management and security plans to manage diversity.
IT for Green. IT can enable many green initiatives. The use of IT, particularly among the white collar staff, can greatly enhance an enterprise’s green credentials. Common green initiatives include the use of e-documents, reducing travel and teleworking. IT can also provide the analytic tools that others in the enterprise may use to reduce energy consumption in the transportation of goods or other carbon management activities.
Reshaping the Data Center. In the past, design principles for data centers were simple: figure out what you have, estimate growth for 15 to 20 years, then build to suit. Newly built data centers often opened with huge areas of white floor space, fully powered and backed by an uninterruptible power supply (UPS), water- and air-cooled, and mostly empty. However, costs are actually lower if enterprises adopt a pod-based approach to data center construction and expansion. If 9,000 square feet is expected to be needed during the life of a data center, then design the site to support it, but only build what’s needed for five to seven years. Cutting operating expenses, which are a nontrivial part of overall IT spend for most clients, frees up money to apply to other projects or investments, either in IT or in the business itself.
Social Computing. Workers do not want two distinct environments to support their work – one for their own work products (whether personal or group) and another for accessing “external” information. Enterprises must focus both on use of social software and social media in the enterprise and participation and integration with externally facing enterprise-sponsored and public communities. Do not ignore the role of the social profile to bring communities together.
Security – Activity Monitoring. Traditionally, security has focused on putting up a perimeter fence to keep others out, but it has evolved to monitoring activities and identifying patterns that would have been missed before. Information security professionals face the challenge of detecting malicious activity in a constant stream of discrete events that are usually associated with an authorized user and are generated from multiple network, system and application sources. At the same time, security departments are facing increasing demands for ever-greater log analysis and reporting to support audit requirements. A variety of complementary (and sometimes overlapping) monitoring and analysis tools help enterprises better detect and investigate suspicious activity, often with real-time alerting or transaction intervention. By understanding the strengths and weaknesses of these tools, enterprises can better understand how to use them to defend the enterprise and meet audit requirements.
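The core idea above, spotting a malicious pattern across a stream of discrete events tied to an authorized user, can be sketched in a few lines. The event format, threshold, and window below are made up purely for illustration:

```python
from collections import defaultdict

# Toy sketch of activity monitoring: flag users with repeated failed
# logins inside a short time window. Event format and thresholds are
# hypothetical, for illustration only.
FAIL_THRESHOLD = 3
WINDOW_SECONDS = 60

def suspicious_users(events):
    """events: iterable of (timestamp, user, outcome) tuples."""
    failures = defaultdict(list)   # user -> timestamps of recent failures
    flagged = set()
    for ts, user, outcome in events:
        if outcome != "failure":
            continue
        recent = [t for t in failures[user] if ts - t <= WINDOW_SECONDS]
        recent.append(ts)
        failures[user] = recent
        if len(recent) >= FAIL_THRESHOLD:
            flagged.add(user)
    return flagged

events = [
    (0, "alice", "failure"), (10, "alice", "failure"),
    (20, "alice", "failure"), (30, "bob", "success"),
    (0, "carol", "failure"), (120, "carol", "failure"),
]
# alice trips the threshold; carol's failures are too far apart
print(suspicious_users(events))
```

A real monitoring tool correlates far richer sources (network, system, application logs), but the principle is the same: individual events look authorized, and only the pattern is suspicious.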
Flash Memory. Flash memory is not new, but it is moving up to a new tier in the storage echelon. Flash memory is a semiconductor memory device, familiar from its use in USB memory sticks and digital camera cards. It is much faster than rotating disk but considerably more expensive, although this differential is shrinking. At the current rate of price declines, the technology will enjoy more than a 100 percent compound annual growth rate during the next few years and become strategic in many IT areas, including consumer devices, entertainment equipment and other embedded IT systems. In addition, it offers a new layer of the storage hierarchy in servers and client computers that has key advantages, including space, heat, performance and ruggedness.
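A 100 percent compound annual growth rate simply means the quantity doubles every year. As a quick illustration (the starting figure is made up):

```python
# Compound annual growth: value after n years at rate r is v * (1 + r)**n.
# A 100% CAGR (r = 1.0) therefore doubles the quantity each year.
def compound(value, rate, years):
    return value * (1 + rate) ** years

shipments = 10.0  # hypothetical starting figure, in millions of units
for year in range(4):
    print(year, compound(shipments, 1.0, year))  # 10, 20, 40, 80 million
```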
Virtualization for Availability. Virtualization has been on the list of top strategic technologies in previous years. It is on the list this year because Gartner emphasizes new elements, such as live migration for availability, that have longer-term implications. Live migration is the movement of a running virtual machine (VM) while its operating system and other software continue to execute as if they remained on the original physical server. This takes place by replicating the state of physical memory between the source and destination VMs; then, at some instant in time, one instruction finishes execution on the source machine and the next instruction begins on the destination machine.
However, if replication of memory continues indefinitely while execution of instructions remains on the source VM, and the source VM then fails, the next instruction simply takes place on the destination machine. If the destination VM were to fail, just pick a new destination and restart the indefinite migration, thus making very high availability possible.
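The pre-copy replication described above can be sketched as a loop: copy everything once, then keep re-copying the pages dirtied during each pass until the dirty set is small, and finish with a brief stop-and-copy. The page tracking here is a toy model, not a real hypervisor interface:

```python
# Minimal sketch of pre-copy live migration: repeatedly copy pages that
# were dirtied since the last pass, then stop-and-copy the remainder.
def live_migrate(source_pages, dirty_log, max_passes=5, threshold=2):
    """source_pages: dict page_id -> contents on the source VM.
    dirty_log: list of sets; dirty_log[i] holds pages dirtied in pass i."""
    dest_pages = dict(source_pages)          # pass 0: copy everything
    remaining = set()
    for i, dirtied in enumerate(dirty_log):
        remaining = dirtied
        if i + 1 >= max_passes or len(remaining) <= threshold:
            break                            # dirty set small enough to stop
        for page in remaining:               # re-copy pages dirtied last pass
            dest_pages[page] = source_pages[page]
    # Final stop-and-copy: the VM pauses briefly, the last dirty pages move
    # over, and execution resumes on the destination.
    for page in remaining:
        dest_pages[page] = source_pages[page]
    return dest_pages

src = {p: f"data-{p}" for p in range(8)}
migrated = live_migrate(src, dirty_log=[{1, 2, 3}, {2}, set()])
print(migrated == src)  # destination converges to the source's memory state
```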
The key value proposition is to displace a variety of separate mechanisms with a single “dial” that can be set to any level of availability, from baseline to fault tolerance, all using a common mechanism and permitting the settings to be changed rapidly as needed. Expensive high-reliability hardware, fail-over cluster software and perhaps even fault-tolerant hardware could be dispensed with while still meeting availability needs. This is key to cutting costs and lowering complexity, as well as increasing agility as needs shift.
Mobile Applications. By year-end 2010, 1.2 billion people will carry handsets capable of rich mobile commerce, providing a fertile environment for the convergence of mobility and the Web. There are already many thousands of applications for platforms such as the Apple iPhone, in spite of the limited market and the need for unique coding. It may take a newer version designed to operate flexibly on both full PC and miniature systems, but if the operating system interface and processor architecture were identical, that enabling factor would create a huge upward turn in mobile application availability.
“This list should be used as a starting point and companies should adjust their list based on their industry, unique business needs and technology adoption mode,” said Carl Claunch, vice president and distinguished analyst at Gartner. “When determining what may be right for each company, the decision may not have anything to do with a particular technology. In other cases, it will be to continue investing in the technology at the current rate. In still other cases, the decision may be to test/pilot or more aggressively adopt/deploy the technology.”

Windows Server 2008 : Top 10 features

#10: The self-healing NTFS file system. Ever since the days of DOS, an error in the file system meant that a volume had to be taken offline to be remedied. In WS2K8, a new system service works in the background to detect file system errors and perform a healing process without anyone taking the server down.

#9: Parallel session creation. "Prior to Server 2008, session creation was a serial operation," Russinovich reminded us. "If you've got a Terminal Server system, or you've got a home system where you're logging into more than one user at the same time, those are sessions. And the serialization of the session initialization caused a bottleneck on large Terminal Services systems. So Monday morning, everybody gets to work, they all log onto their Terminal Services system, like a few hundred people supported by the system, and they've all got to wait in line to have their session initialized, because of the way session initialization was architected."

#8: Clean service shutdown. One of Windows' historical problems concerns its system shutdown procedure. In XP, once shutdown begins, the system starts a 20-second timer. After that time is up, it asks the user whether she wants to terminate the application herself, perhaps prematurely. For Windows Server, that same 20-second timer may be the lifeclock for an application, even one that's busy spooling ever-larger blocks of data to the disk.

#7: Kernel Transaction Manager. This is a feature developers can take advantage of, one that could greatly reduce, if not eliminate, one of the most frequent causes of System Registry and file system corruption: multiple threads seeking access to the same resource.


In a formal database, a set of intended changes is stored in memory, in sequence, and then "committed" all at once as a formal transaction. This way, other users aren't given a snapshot of the database in the process of being changed - the changes appear to happen all at once. This feature is finally being utilized in the System Registry of both Vista and Windows Server 2008.


"The Kernel Transaction Manager [intends] to make it very easy to do a lot of error recovery, virtually transparently," Microsoft software engineer Mark Russinovich explained. "The way they've done this is with the [KTM] acting as a transaction manager that transaction clients can plug into. Those transaction clients can be third-party clients that want to initiate transactions on resources that are managed by Transaction Resource Manager - those resource managers can be third-party or built into the system."
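The all-or-nothing semantics the KTM brings to the registry can be illustrated with a toy transaction object. This is a conceptual sketch, not the actual Windows API: changes are staged in memory and only become visible on commit, and a failure before commit leaves the store untouched.

```python
# Toy sketch of transactional semantics: staged writes are invisible to
# readers until commit, and rollback discards them entirely.
class Transaction:
    def __init__(self, store):
        self.store = store       # the "registry": a shared dict
        self.staged = {}         # pending writes, invisible to readers

    def set(self, key, value):
        self.staged[key] = value

    def commit(self):
        self.store.update(self.staged)   # all changes appear at once
        self.staged = {}

    def rollback(self):
        self.staged = {}                 # discard everything

registry = {"Version": 1}
tx = Transaction(registry)
tx.set("Version", 2)
tx.set("Path", r"C:\App")
print(registry["Version"])  # still 1: uncommitted changes are invisible
tx.commit()
print(registry["Version"])  # now 2: both writes landed atomically
```

If two threads each work inside their own transaction, neither ever observes the other's half-applied changes, which is exactly the corruption scenario the KTM is meant to eliminate.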

#6: SMB2 network file system. Long, long ago, SMB was adopted as the network file system for Windows. While it was an adequate choice at the time, Russinovich believes, "SMB has kind of outlived its life as a scalable, high-performance network file system."
So SMB2 finally replaces it. With media files having attained astronomical sizes, servers need to be able to deal with them expeditiously. Russinovich noted that in internal tests, SMB2 on media servers delivered thirty to forty times faster file system performance than Windows Server 2003. He repeated the figure to make certain we realized he meant a 4000% boost.



#5: Address Space Layout Randomization (ASLR). Perhaps one of the most controversial added features, especially since its debut in Vista, ASLR makes certain that no two successive boots of an operating system load the same system drivers in the same places in memory.

#4: Windows Hardware Error Architecture (WHEA). That's right, Microsoft has actually standardized the error - more accurately, the protocol by which applications report to the system what errors they have uncovered. You'd think this would already have been done.

#3: Windows Server Virtualization. Even pared down a bit, the Viridian project will still provide enterprises with the single most effective tool to date for reducing total cost of ownership...to emerge from Microsoft. Many will argue virtualization is still an open market, thanks to VMware; and for perhaps the next few years, VMware may continue to be the feature leader in this market.

#2: PowerShell. At last. For two years, we've been told it'll be part of Longhorn, then not really part of Longhorn, then a separate free download that'll support Longhorn, then the underpinning for Exchange Server 2007. Now we know it's a part of the shipping operating system: the radically new command line tool that can either supplement or completely replace GUI-based administration.

#1: Server Core. Here is where the world could really change for Microsoft going forward: imagine a cluster of low-overhead, virtualized, GUI-free server OSes running core roles like DHCP and DNS in protected environments, all to themselves, managed by way of a single terminal.

Top 10 in Windows Server 2008 R2

#10  Migration Tools:  Starting off my Top 10 countdown are the migration tools available for Windows 2008 R2.  Okay, so who gets excited about migration tools?  Considering Windows 2008 R2 comes as a 64-bit-only operating system and there’s no in-place upgrade path from 32-bit to 64-bit, the release of Windows 2008 R2 requires tools to help organizations “migrate” server to server rather than just shove in a CD and do an in-place upgrade.  Because of that, Microsoft made some GREAT tools (and for any org that plans to migrate from physical Windows 2003 hardware to virtualized Windows 2008 R2 guest images, this is the PERFECT way to go from physical to virtual!!!)  Go to http://www.microsoft.com/migration for a link to the various migration tools. There are tools that help you migrate file servers (including files and ACLs), tools that help you migrate RRAS servers to 2008 R2, and print server migration tools for shifting your printers and print queues to 2008 R2.  My favorite is the DHCP migration tool, which migrates not only scopes but also LEASES from old Windows DHCP servers to Windows 2008 R2 servers!  (Do you realize what that means?  You can migrate a DHCP server in the middle of the day and carry over DHCP leases without having to expire leases from the old server to get DHCP activated on the new one! Sorry, we were really excited when this tool came out, and to this day I still get excited about sharing this!!!)

#9  Active Directory 2008 R2: Number 9 on my list are updates to Active Directory.  Gotta start off by saying that no one really “has” to migrate to AD/2008 or AD/2008 R2 for any of the current products, so things like Exchange 2010, SharePoint 2010, etc. do NOT require AD/2008 (or 2008 R2).  We have a LOT of customers who are happily running AD/2003 in Native Mode with all of the latest and greatest products running.  However, for those who want enhancements in AD, the biggies in 2008 R2 are the Recycle Bin (which effectively allows you to simply recover deleted objects in AD, so if you fat-finger delete a user object or accidentally overwrite an AD group, just go to the recycle bin and undelete the stuff…).  Also in AD/2008 R2 is Offline Domain Join, which allows you to pre-create a computer account in AD, dump an XML file, and then, when you install Windows 7 on the computer, run a DJoin command on the Windows 7 machine and “join” the domain without the Win7 computer even being attached to the network!  That way you can build systems in the lab and “join them” to AD without actually / physically connecting the computers to AD.  Okay, another geek moment, but this is great when we’re prestaging systems to roll out in another site or domain and we don’t even need to be physically at or physically connected to that domain…  Oh, and something that I’m still excited about that’s in AD/2008 is Fine Grain Password Policies.  In AD/2003 you could only have one password policy per domain (upper case, complex password, change every 30 days, etc. had to be the SAME for everyone in the domain).  With Fine Grain Passwords added to AD/2008 (and carried over to AD/2008 R2) you can now set password policies “per group,” so you can have folks in HR or Accounting change their passwords every 20 days to please the regulators, while field support and sales people change their passwords, say, every 60 days.  All done by groups, really slick!!!

#8  Remote Server Manager:  Alright, #8 on my Top 10 countdown is the ability to remotely manage other Windows 2008 R2 servers using the Server Manager tool.  With Windows 2008 you had this really great tool, Server Manager, that allowed you to add Roles and Features and administer the server from a single console; however, it was ONLY for the system you were on, so you had to constantly Remote Desktop into other servers.  Now with Windows 2008 R2 you can remotely access Server Manager on other systems.  So just sit at one console and reach into other servers in your network to do day-to-day administrative tasks!

#7  Direct Access:  Okay, DirectAccess probably gets my award for “most innovative technology” in the Windows 7 client and Windows 2008 R2 server, and it would have been closer to #1 in my countdown if it weren’t so complicated to implement.  DirectAccess is a technology that effectively does away with VPNs.  Just like RPC/HTTPS (Outlook Anywhere) eliminated the need to VPN from Outlook to Exchange for your email a few years ago, DirectAccess does away with VPNs by giving you access to “everything else” on your network, like your F: and K: drive shares, http:// SharePoint shares, accounting software, CRM software, etc.  Basically “anything” you normally have access to from a VPN, you can now access “natively” from a Windows 7 client.  DirectAccess leverages IPSec policies and certificates to “automatically” tunnel a Windows 7 client into the network.  Effectively, a client that has DirectAccess configured can simply turn on their laptop or desktop computer, get an Internet connection, and over encrypted IPSec re-establish normal network connections from “outside” the network.  AND your internal network doesn’t all have to be Windows 2008 R2; just a single server in the DMZ needs to be running Windows 2008 R2 as a “proxy” that effectively encrypts communications between the client and this one 2008 R2 server.  Everything else “inside” your network can be just plain old TCP networking like Windows 2003, SharePoint, Linux, etc…  Okay, so here’s the catch: the client systems need to be Windows 7 (not a biggie; a lot of orgs have already started their migration to Win7 clients).  You need to enable IPv6 on the 2008 R2 proxy/gateway server and on all Win7 clients (also not a biggie because it’s on by default since Win2008 and Vista, although you now need to understand how IPv6 works to do the proper addressing, so that’s a learning experience).
You need to be running Microsoft Certification Authority (CA), which for many orgs is also not a big deal as they’ve been running Microsoft’s CA for a while; however, if you haven’t set up Microsoft’s CA or aren’t familiar with auto-enrollment of certificates to automatically push out certs using AD, this is something new for you to learn.  If you’re already doing auto-enrollment, you’re set!  And then there’s IPSec and split DNS; these are the technical pieces that everyone gets wrong, and Microsoft’s whitepaper guide on DA is not very helpful.  We took time writing this portion of the chapter of my Windows 2008 R2 book, because once you get this working, DA actually works!  So, a REALLY slick technology once you get it going.  I’d recommend any hardcore techie throw this in your lab to fiddle with; it’s a great technology to understand and ultimately implement!

#6  SConfig in ServerCore: So #6 on my countdown is SConfig.exe in ServerCore.  How many of you actually installed Microsoft’s GUI-less ServerCore when it came out in Windows 2008?  Who enjoyed the “net user administrator…”, “netdom renamecomputer…”, “netdom join…” commands just to get a ServerCore system assigned an IP address and joined to a domain before you could even do anything?  It was a pain, and the reason a lot of people gave up on ServerCore.  So with Windows 2008 R2, Microsoft came up with a utility called “Sconfig.exe” that you run after you install ServerCore.  Now, from the command prompt, you just type Sconfig and a text-based “menu” shows up on screen.  You walk the menu to name your server, give it an IP address, join a domain, and, most importantly, set it up so you can run Remote Server Manager (see #8 on my list) to remotely manage the server.  So you now have a simple menu to get the basics going, and then you remote into the system and use the Server Manager GUI to do the rest!  Where I could count the number of ServerCore systems we installed on Windows 2008 on one hand, I can now say we’re deploying a couple dozen ServerCore systems a week these days because of these new tools built into Windows 2008 R2!

#5  Improvements in Group Policy Management:  Okay, most of this is Windows 2008 stuff on the Group Policies, but nevertheless, what Microsoft has done with Group Policies in Windows 2008 (and 2008 R2) has been awesome, so it landed in the #5 spot on my Top 10!  The minute you launch the Group Policy Management Console (GPMC) you’ll notice not just the Computer Configuration container and the User Configuration container, but, under the Computer and User containers, “Policies” and “Preferences”.  The Policies container is the same one that has been in AD all along, where you have containers for Account Policies, Windows Settings, Administrative Tools, Security, etc.  But under “Preferences” is a whole new set of “views” into policies.  For some 1000+ policies, instead of more text-based “descriptions” of stuff, there’s a GUI that lets you “see” a Control-Panel-like view where you can click through to “set” settings.  When you set the settings and click OK, you’re effectively creating the group policy.  So for things like Internet Explorer settings, you just click the checkbox or option on screen, and those settings are set.  Or you can do drive mappings through a GUI, or set display settings through a GUI.  This whole Preferences area REALLY makes setting policies easier.  It’s just like being in Control Panel on your workstation, except what you choose is set as the “policy” for the managed systems…
 
#4  Clustering of Print Servers and DHCP:  We’re at #4 on my countdown and it’s about clustering.  With Windows 2008 R2, you can cluster everything you used to (file servers, app servers, etc.), but they’ve added a whole bunch of other things to cluster.  My 2 favorite new clustered features are clustering print servers and DHCP servers…  How many times have you had a print server’s print service stop and all of the print queues on the system go offline?  A simple restart of the service gets you going again, but now you have hundreds of print jobs backed up…  I would never have taken 2 hardware systems and clustered print services; that was overkill of hardware.  But with virtualization, heck yeah: put a print server as a guest session on one physical host and cluster it with a guest session on another physical host.  I now have full redundancy on a print service with effectively zero downtime!  The other cluster service that has been really slick is clustering DHCP.  Do you know that for the past 20 years we’ve been doing DHCP “split scopes”, the whole 80/20 or 60/40 split scope across servers?  That really wasn’t fault tolerance; it was more about minimizing our risk so that when a DHCP server went offline we were still able to limp along.  Now with virtual guest sessions, I can CLUSTER the DHCP servers with 100% of my IP addresses.  I have two servers issuing IP addresses in perfect unison.  If I lose one server, I have another DHCP server continuing exactly where the first left off.  I can fail over and patch/update a server back and forth.  This completely changes (and drastically improves) something as simple as DHCP…
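For anyone who never ran a split scope, the old 80/20 arithmetic looks like this; the subnet and ratio below are illustrative, and the point is that losing one server strands its whole slice of the pool, whereas a clustered pair shares 100% of it:

```python
# Sketch of the old 80/20 DHCP "split scope": two servers each own a
# disjoint slice of the address pool.
import ipaddress

pool = list(ipaddress.ip_network("192.168.1.0/27").hosts())  # 30 usable addresses
split = int(len(pool) * 0.8)
server_a, server_b = pool[:split], pool[split:]  # 80/20 split

print(len(server_a), len(server_b))  # 24 6
# If server A dies, only server B's 20% slice can still be leased:
print(len(server_b) / len(pool))     # 0.2
```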

#3  Remote Desktop Services:  Number 3 on my list is Remote Desktop Services, or RDS, which used to be called Terminal Services.  Every administrator has used Terminal Services / Remote Desktop to reach into a server to remotely administer or manage it, and a number of orgs have used Terminal Services and Citrix for their client systems.  With Windows 2008 R2, we’ve really found orgs can get rid of Citrix and just use the straight features out of 2008 R2.  Why did people buy Citrix?  Because Citrix provided single sign-on (so you didn’t have to enter your password to log on to Citrix), better remote printing, high availability, and the ability to just drop an application icon on a user’s desktop and give remote access to an “application” without having to do the full Remote Desktop with the 2nd Start button and everything.  With Windows 2008 R2 (actually with Windows 2008), ALL of these features are native to Remote Desktop Services out of the box.  So as long as your client system is running XP SP3 or higher, you get single sign-on and never have to type a logon/password to get access to an RDS session.  You can run RDS “RemoteApp” to simply launch an application right from your client system.  Also added in Windows 2008 R2 is Virtual Desktop Infrastructure (VDI), which provides the ability to give personal desktop sessions (like Hyper-V guest sessions) to individual users for a full desktop experience.  All of this is something you used to have to go to 2 or 3 other vendors for (Citrix and VMware), and it’s now included out of the box in Windows 2008 R2.  Check these features out!

#2  Hyper-V R2: Alright, down to #2, and Hyper-V server virtualization hits my #2 spot…  What can I say, just a couple of years ago 100% of my server virtualization business was VMware; they dominated the whole virtualization world.  In just over a year, Microsoft released Hyper-V with Windows 2008 and then updated it in Windows 2008 R2 to include not only what VMware has in VI3 and their newly released vSphere 4, but Microsoft now gives it all away out of the box in Windows 2008 R2.  If money matters to you, what organizations used to buy ESX, VMotion, and now vSphere for, you get it all in Windows 2008 R2.  In side-by-side comparisons, server virtualization is server virtualization whether it’s VMware or Hyper-V R2: you can run guest sessions (lots of them on a 32GB, 8-core server; 12-15 easily), and you can take snapshots, so before you patch or update a guest session, just take a snapshot, and if the update screws up your application, just roll back to the snapshot.  If you have a problem with any Microsoft product being virtualized (Exchange 2010, SharePoint 2007, System Center, etc.), it’s the same company / same support call for Exchange support as for Hyper-V virtualization support.  And with Hyper-V R2, you can now “cluster” Hyper-V host servers and move guest sessions across Hyper-V hosts, AND not only move them around for disaster recovery, you can do it LIVE in the middle of the day without dropping a client session’s state, in what Microsoft calls “Live Migration” (and VMware calls VMotion, which you pay lots of extra $$$ for).  You just right-click a guest session and “Live Migrate” it to another server in the middle of the day to evacuate a problematic host, do load balancing of hosts, or move images as part of your process to patch and update a host.
At the end of the day, VMware is far from 100% of our virtualization business; in just a couple of years Microsoft has reached parity, and Hyper-V now makes up 50% of our virtualization business…

#1  Stability and Reliability:  Okay, not a specific feature or function, but my #1 on Windows 2008 R2 is that it just plain works!  This is the first operating system we’ve deployed for well over a year in beta in full-blown production environments without issues, and our experience since the product RTM’d has been just as solid.  No issues, no surprises; this one has been a keeper, and because of that, the stability and reliability of Windows Server 2008 R2 and Windows 7 puts this in my #1 spot!

Now it comes down to getting to know MORE about each of these top 10 features and functions, which over this month I will be blogging about: tips, tricks, best practices, and lessons learned on Windows Server 2008 R2, including insider comments and step-by-step “how to” guidance.  All of this content is also covered in detail in my book “Windows Server 2008 R2 Unleashed”, all 1,550 pages of it…

Friday, August 12, 2011

Windows 8 Tablet Reveal? 4 Things We Want to See

Live Tiles

The big, touchable squares that are a hallmark of Windows Phone 7 are rumored to play a role in Windows 8's tablet interface, which is great news if true.
They're the most distinctive part of Microsoft's mobile operating system, and they go beyond the functionality of simple app icons. For instance, Windows Phone's "Mango" update will let users jump to specific functions within an app. This is already possible with some app shortcuts in Windows 7, so I wouldn't be surprised to see similar abilities built into Windows 8 tablet tiles.

Tablet-Centric Apps

If Microsoft shows a desktop version of Office running on a tablet, it'll be a pretty disappointing demo. I want to see apps that are optimized for the touch screen, while sharing data with their desktop counterparts. For that matter, I'd like to see how Microsoft will handle the divide between tablet and desktop apps in Windows 8. If you buy one app version, will you automatically have access to the other?

Cloud Sync

Rumors have hinted at a big role for online application and data syncing in Windows 8. My hope is that it's an integral part of how apps function on Windows 8 tablets. Imagine, for instance, having all your documents on the tablet automatically backed up online, so you can access them from any other Windows device without a manual transfer, whether it's a tablet or not.

Multiple Screen Sizes (or support thereof)

Right now, 10-inch tablets are all the rage, but 7-inch tablets have their own advantages -- they're easier to hold and type on, for example, and they're great for gaming. I hope Microsoft is planning to accommodate lots of screen sizes out of the gate, even if the company doesn't show multiple pieces of hardware next week.

Thursday, August 11, 2011

Chart: Microsoft's performance under Gates vs. Ballmer




Summary: Microsoft has been treading water for a decade, but here is a chart that provides a great visual of how the company’s value skyrocketed under Bill Gates and then flattened when Steve Ballmer took over.

Honestly, it’s surprising that Steve Ballmer hasn’t come under more fire during the past decade for Microsoft’s lack of innovation, dearth of new hit products, and a stock price that has continued to tread water.
However, those issues and Ballmer’s plan to keep milking Windows and Office rather than push forward and look for the next big advances in personal computing may finally be catching up with him.
The graphic below (created by Erik Pukinskis) charts the market value of Microsoft over the past two decades, comparing the CEO reigns of Bill Gates and Steve Ballmer. You have to be a little bit careful with this chart: it's not completely to scale. Notice that the bands between 0 and 5 and between 5 and 20 are much larger than those between 20 and 40 (and even those between 40 and 60). This gives a somewhat exaggerated sense of how much Microsoft grew under Gates.
Nevertheless, it’s pretty amazing how all of the growth happened while Gates was CEO and then things stagnated as soon as Ballmer grabbed the reins. Of course, it’s also important to keep in mind that the Gates-Ballmer hand-off coincided with Microsoft’s big antitrust case with the U.S. government. However, even after the dust settled from that, Microsoft has continued to struggle.

Microsoft’s Success: Bill Gates vs. Steve Ballmer

In the most recent print issue (June) of Fast Company, columnist Farhad Manjoo had an article on why Bill Gates needs to replace Steve Ballmer at Microsoft, the general idea being that Ballmer is too polarizing and lacks the vision that guided the company to its current position of dominance. While the idea of Gates returning to Microsoft is great, Manjoo’s reasoning is faulty.

Steve Ballmer is known for being a Microsoft fanatic and having a temper – traits also famously held by Bill Gates, though apparently put to better use by Gates than by Ballmer. Since Ballmer began his transition to Supreme Leader (CEO) of Microsoft, a number of things have gone wrong for the company: an almost complete failure to be relevant in the online world, the disaster known as Windows Vista, various Windows-based devices that failed to garner any market share, and a number of smaller programs that made little to no difference to customers. That’s not to say that Microsoft hasn’t seen success under his leadership: there have also been the Xbox, the Xbox 360, smartphones with Windows Mobile, and, most importantly, Windows 7.

The problem, though, is the time it takes Microsoft-by-Ballmer to come to the proper conclusions. The Xbox and its iterative child were both successful immediately, but this is due in large part to their segregation from the rest of the company and the large degree of freedom the development teams have – almost all other Microsoft successes take far too long or are far too painful to win customers over at the initial launch. Windows 7 is a perfect example: Windows XP was released in 2001 and was generally loved, but it wasn’t until late 2006 that Windows Vista launched, and it was immediately hated.
It took Microsoft five years of both development and grand public statements to transition from XP to Vista, and the result was an operating system that was essentially XP with a visual refresh, some faux-security measures, and features no one wanted. Most importantly, Vista was unbelievably slow, almost to the point of being unusable. Fast forward three years to 2009, when Windows 7 was released and quickly became one of the most loved and respected Microsoft products in memory.
The key that makes Windows 7 great is its focus on performance, usability, and changes central to the way the operating system works, rather than layering new features and GUIs on top of an already shaky codebase. If Bill Gates had still been at the helm, it seems unlikely this misstep would have happened, or been executed so poorly, for one simple reason: Bill Gates is a programmer, and Steve Ballmer is not.
To people like Ballmer, code is confusing and sometimes scary, so visual details and tangible evidence become more important than what’s “under the hood” – the GUI refresh of Vista is, in all probability, a direct result of this syndrome. Ballmer may have the vision and the manhandling attitude, but he simply doesn’t have the technical knowledge to deeply question the technical aspects of products, as Gates was so famous for doing. Whether or not the current CEO understands his shortcomings is unclear, so it remains to be seen if Windows 7 was more of an accident than a moment of brilliance.
The simple truth is that even without Bill Gates, the man who essentially created the world of modern computing, Microsoft is regaining steam and is looking ever more like its 1990s self, when it dominated every facet of life: they’re once again hiring all the best programming talent they possibly can, rumors circulate about the new life breathed into the company and its employees, their R&D departments are hyperactive, and their clout is being thrown around with the confidence and swagger of a previous decade. Ballmer may be no Gates, but he’ll continue to navigate the company to a point of dominance as long as he asks himself “What would Bill do?” and doesn’t engage in the ultimately futile act of micromanaging programmers.
Bill Gates is gone from Microsoft, and probably for good, but it’s not a stretch to imagine how the last decade might have been different, possibly better, had he still been in full control. He picked Steve Ballmer for reasons of similarity and confidence, so the world now must look to Ballmer for monolithic Microsoft power and leave Gates to his valiant attempts to save the world.

Microsoft vs. Apple: The History Of Computing [Infographic]

It’s the epic battle of the century, or rather it was. Now that Bill Gates is no longer there, the battle for the throne is no longer on an epic scale, if you ask me. And now that Apple has surpassed Microsoft (if I am not misinformed), it’s like it’s all about Apple ALL the time. You know, the keynotes, the gadgets, and god knows what. Whatever they do, they call it groundbreaking, sexy, or innovative. They might have some pretty cool products, but it’s just a matter of time before someone in a garage somewhere will come out of their dungeon and release something that even Apple can’t rival. What that will be, I have no idea.
However, looking back on the rivalry between Apple and Microsoft is quite an experience, since there have been some controversies over the years. The guys behind Manolution put together a quite impressive infographic that will take you through the years of their rivalry. If you think it will be one of those small, sparse, and boring infographics, you are totally wrong. This is what I would call an epic-sized infographic.
It’s inspirational to see a company, in this case Apple, come back from an almost certain collapse and be built up again into one of the world’s largest and most successful corporations. Looking at the market capitalization of the two companies will show you exactly why Apple is being somewhat cocky about its position. Their share price has been skyrocketing for the last couple of years, and it is all thanks to the new gadgets they release. It’s mostly because of the iPhone and the Mac Pro computers, and of course, the iTunes store. And, possibly, everything else they are doing…

History Of Computing Apple Microsoft