Cybercrime increasing on Facebook

There’s a piece by Jim Finkle in Reuters this morning about the rise in cybercrime on social networking sites. The article mentions that MySpace had been plagued by this for several years, but now, with the increasing popularity of Facebook, the criminals are going where the game is.

Per the article, “Facebook is the social network du jour. Attackers go where the people go. Always,” said Mary Landesman, a senior researcher at Web security company ScanSafe.


Scammers break into accounts posing as friends of users, sending spam that directs them to websites that steal personal information and spread viruses. Hackers tend to take control of infected PCs for identity theft, spamming and other mischief.

The Resilient Dynamic Data Center

I attended the 2009 IT Roadmap Conference in Atlanta, Georgia this week and sat in on the presentation by Johna Till Johnson of Nemertes Research, entitled “Building a Resilient Dynamic Data Center.” The presentation was a summary of hundreds of hours of discussions with data center operators and enterprises with data centers.

The information was summarized and presented in the framework of trends, from old to new. Of the data centers investigated, the oldest were approximately 18 years old and the youngest around eight years old.

Beginning with the older sites: data centers of this vintage were built favoring reliability over responsiveness to change or business agility. The rate of growth within the data center was low, HVAC and power were relatively static, and there was little network infrastructure.

Uptime Institute Data Center Tier Classifications: Time for a Refresh?

Posted by Bob Landstrom

Enterprises have embraced the data center tier classification system developed by the Uptime Institute for evaluating their own data center facilities and those of potential colocation and hosting providers.

The subject of facility availability has matured over recent years.  The mindset has shifted from recognizing that existing IT facilities were a problem to asking how to improve them.  How much should we improve them?  What do we improve?  How far can we grow?  Traditionally, these questions were answered without much guidance beyond the level of capital funds available for improvements.

The tier classification system developed by the Uptime Institute is an academic framework that can be used as guidance for determining the type of data center facility appropriate for the Business.  It’s a seminal body of work, and has become part of the daily lexicon of those working in the data center world.  We’ve written about it several times in this forum, covering what it is, where it came from, and what it’s not.  Indeed we’ve dedicated time in this blog talking about how the tier classifications are (very) frequently misused.

Container Data Centers: “Waiting for Godot” meets “The Price is Right”

Regular readers of this blog will know that we are strong advocates of container data centers as a step toward modular data center design and as a facility component for extremely dense data processing. Earlier posts talked about 2008 as a breakout year for containers (it wasn’t) and containers used as facility components in cargo ship data centers (haven’t seen them either).

It seems that 2009 isn’t going to click for containers either. We’ve seen reaffirmations from Microsoft and select others that containers are still a planned component in the construction of data centers for very high density data processing, but this too seems to be lagging.

…Where are the containers? They’re coming. When? They’re coming.

Oracle and Sun?

Early news wires this morning report the acquisition of Sun Microsystems by Oracle at $9.50 per share. While many of us were watching the dance between IBM and Sun over the past few months, this deal with Oracle leaves me scratching my head. Oracle’s Larry Ellison says, “The acquisition of Sun transforms the IT…

Netbooks and The Cloud

Posted by Bob Landstrom

Attention and interest in netbooks have increased sharply over the past six months. A recent post in IT Business Edge makes the claim that netbook sales can be credited with the resiliency the PC market has had through the recent recession.

When one looks for the reasons why netbooks are appealing, it’s easy to notice their conveniently small size (many will fit into a large jacket pocket or a purse), the full QWERTY keyboard (as opposed to the thumb-typing PDA form factors), and ease of connectivity through now nearly ubiquitous free Wi-Fi. They’re a blogger’s dream, since they can be easily carried (when compared to a conventional laptop PC) and are always connected… aligning well with the kind of spontaneous editorializing in which bloggers delight. Let’s not forget too that they are affordably priced (under $400 at this writing).

Detractors have their voice too, pointing out that as a computing device netbooks are simply less capable, citing the inability to carry large stores of data and workhorse applications as well as their much smaller screens. While both the advocates and detractors are correct in their assessments of these gadgets, there is more to this than just another product form factor serving a user preference niche. I suggest that what we’re looking at is the next evolutionary step in a new usage model for the holistic computing platform, one closely linked with the migration of enterprise storage and applications into the cloud (I’m referring here to cloud computing, including any XaaS paradigm you prefer to imagine when thinking about cloud computing).

Data Center Tier Levels and Real Availability

Data Center consolidation and Data Center outsourcing are top of mind for many CIOs these days. Many companies have ’90s vintage IT facilities that not only lack the availability to align with the Business’s operating model but also are struggling to keep up with the power and cooling demands of contemporary computing systems.
The deployment of multi-core processors and blade-based systems has pulled the rug out from beneath many a facility manager. The rapidly growing consumption and cost of energy due to the Data Center have caused many a CFO to define facility operational costs as an IT problem.

Whatever the motivation for the Data Center project, one will have to become familiar with the spirit of the TIA-942 tier classifications, as well as their nuances, to exercise the proper degree of due care in planning these very expensive projects. In many cases, the enterprise may focus on the minutiae of TIA-942 out of a desire to align the Business with the proper availability of the IT facility, but this may come at the expense of pragmatism.

Windows on a Mainframe? Oh My!

Microsoft Windows running on a mainframe? Could this be the greatest abomination since the Labradoodle (before you Labradoodle owners out there skewer me, I’m sorry but the Labrador Retriever is such a wonderful dog and to corrupt this fine breed with Poodle blood is a bane to me)? Anyway, this week’s Network World magazine carries…

Your Next Data Center

The past two years have brought an explosion of activity in the area of data center facilities. On the provider side, we can talk about services we expect from the Cloud. The proliferation of XaaS, exponential growth of content for Web 2.0 services, and the interest in Cloud-based services by the Enterprise drives demand for storage and utility-style computing services. Providers of these services are expanding their facilities’ footprints at an impressive rate.

For the enterprise, Business demand for IT resources has stressed the resources of enterprise IT facilities both above and below the white space.

At the turn of the century, it was common to plan IT facilities using the Watts per Square Foot (W/SF) model.  We saw data centers operating comfortably at 50 W/SF in those days, and if you had a generator backup you felt pretty good about availability.  Things have changed drastically.
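To make the old area-based model concrete, here is a minimal sketch of the arithmetic behind it. The 50 W/SF figure comes from the discussion above; the 10,000 SF room size is a hypothetical example, not a figure from any particular facility.

```python
# Sketch of the circa-2000 Watts-per-Square-Foot planning model.
# 50 W/SF is the figure cited in the text; the room size is hypothetical.

def total_critical_load_kw(area_sf: float, watts_per_sf: float) -> float:
    """Total critical load in kW under the area-based planning model."""
    return area_sf * watts_per_sf / 1000.0

# A 10,000 SF room planned at 50 W/SF budgets 500 kW of critical load,
# with no regard to how that load is distributed across the racks.
print(total_critical_load_kw(10_000, 50))  # 500.0
```

The weakness, of course, is that the model says nothing about concentration: 500 kW spread evenly looks nothing like 500 kW pulled by a handful of dense rows.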

We now talk about the tier-level rating of data centers to align facility availability with the operating model of the Business. We focus much attention on the TIA-942 and Uptime Institute guidelines for redundancy and topology of MEP systems. Instead of the W/SF planning metric we use kilowatts per rack (kW/rack) because it’s much more relevant to how we deploy technology today. The density of silicon in the data processing footprint is constantly increasing, which is driving the power and cooling demand of the facility.

In 2003, we saw power density levels already at 4 kW/rack.  By 2005, when blade servers first began to be a common sight in the data center, 9 kW/rack was common.  Today, as those systems are seen as more of a standard building block, 20-50 kW/rack is becoming common.  An enterprise IT facility built before 2005 very likely cannot support this level of demand.
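A quick hypothetical shows why pre-2005 facilities hit the wall. The density figures (4, 9, and 20-50 kW/rack) are the ones cited above; the 500 kW power budget is illustrative, roughly what a 10,000 SF room planned at the old 50 W/SF figure would have.

```python
# Hypothetical: how many racks a fixed critical-power budget feeds at
# the rack densities cited in the text. The 500 kW budget is illustrative.

def racks_supported(budget_kw: float, kw_per_rack: float) -> int:
    """Whole racks a critical-power budget can feed at a given density."""
    return int(budget_kw // kw_per_rack)

budget_kw = 500
for density in (4, 9, 20, 50):
    print(f"{density:>2} kW/rack -> {racks_supported(budget_kw, density)} racks")
```

The same room that fed 125 racks at 2003-era densities feeds only 10 at a 50 kW/rack blade deployment, long before the floor space runs out.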

Still, the growth of the Business and the evolution of data processing systems are outpacing the capacity of IT facilities, and something must be done in order to continue to scale IT capabilities.