Energy Internet and eVehicles Overview

Governments around the world are wrestling with the challenge of how to prepare society for inevitable climate change. To date most people have focused on how to reduce greenhouse gas emissions, but there is now growing recognition that, regardless of what we do to mitigate climate change, the planet is going to be significantly warmer in the coming years, with all the attendant problems of more frequent droughts, flooding, severe storms, etc. As such we need to invest in solutions that provide a more robust and resilient infrastructure to withstand this environmental onslaught, especially for our electrical and telecommunications systems, while at the same time reducing our carbon footprint.

Linking renewable energy with high speed Internet using fiber to the home, combined with autonomous eVehicles and dynamic charging, where a vehicle's batteries are charged as it travels along the road, may provide a whole new "energy Internet" infrastructure for linking small distributed renewable energy sources to users, one that is far more robust and resilient in the face of climate change than today's centralized command and control infrastructure. These new energy architectures will also significantly reduce our carbon footprint. For more details please see:

Using autonomous eVehicles for Renewable Energy Transportation and Distribution: http://goo.gl/bXO6x and http://goo.gl/UDz37

Free High Speed Internet to the Home or School Integrated with solar roof top: http://goo.gl/wGjVG

High level architecture of Internet Networks to survive Climate Change: https://goo.gl/24SiUP

Architecture and routing protocols for Energy Internet: http://goo.gl/niWy1g

How to use Green Bond Funds to underwrite costs of new network and energy infrastructure: https://goo.gl/74Bptd

Friday, December 21, 2007

Future Internet could reduce today's PSTN CO2 emissions by 40%


[The ITU has put out an excellent report called ICTs and Climate Change. Highly recommended reading, and further support for my belief that the ICT industry can not only reduce its own emissions to zero but also enable other traditionally carbon-heavy sectors of society to reduce their carbon footprint through "bits and bandwidth for carbon" trading schemes such as free fiber to the home, free mobile telephony, and other free eProducts and eServices. Some excerpts --BSA]

http://www.itu.int/ITU-T/newslog/PermaLink,guid,9ba8aa93-e90d-4e9b-859c-b94b6d57c424.aspx


Information and Communication Technologies (ICTs) are undoubtedly part of the cause of global warming as witnessed, for instance, by the millions of computer screens that are left switched on overnight in offices around the world.

But ICTs can also be part of a solution. This Technology Watch briefing report looks at the potential role that ICTs play at different stages of the process, from contributing to global warming (section 1), to monitoring it (2), to mitigating its impact on the most vulnerable parts of the globe (3), to developing long-term solutions, both directly in the ICT sector and in other sectors like energy, transport, buildings etc (4). The final sections look at what ITU-T is already doing in this field (5), strategic options (6), and the campaign for a climate-neutral UN (7).

A major focus of ITU’s work in recent years has been on Next-Generation Networks (NGN), which are expected by some commentators to reduce energy consumption by 40 per cent compared to today’s PSTN.

The telecommunications industry is currently undergoing a major revolution as it migrates from today’s separate networks (for voice, mobile, data etc) to a single, unified IP-based next-generation network. The savings will be achieved in a number of ways:

• A significant decrease in the number of switching centres required. For instance, BT’s 21st Century Network (21CN) will require only 100-120 metropolitan nodes compared with its current 3,000 locations;

• More tolerant climatic range specifications for switching locations, which are raised from a 35 degree range (between 5 and 40°C) to a 50 degree range (between -5 and 45°C). As a result, the switching sites can be fresh-air cooled in most countries rather than requiring special air conditioning.



Wednesday, December 19, 2007

A carbon negative Internet - Freedom to Connect Conference

[I encourage all those who are interested in the issues of global warming, and how the Internet can help mitigate the greatest challenge of our lifetime, to attend the upcoming Freedom to Connect Conference in Washington DC --BSA]

http://freedom-to-connect.net/

Announcing F2C: Freedom to Connect 2008!
March 31 & April 1, 2008, Washington, DC

The theme of F2C: Freedom to Connect 2008 is "The NetHeads Come to Washington."

This year there will be a second theme at F2C, "A Carbon-Negative Internet." We will devote at least one session, and perhaps a half day, to exploring the impacts of applications like user-monitored, edge-based control of energy usage, cloud routing of compute-intensive operations to geographical locations with renewable energy, peer-to-peer automobile traffic optimization, and the putative trade-off between physical presence and virtual presence.

Conventional wisdom is that NetHeads have sharply different interests than telephone companies and cable companies. This is mostly true, yet both need a robust, sustainable Internet. It is in the long-term interests of neither to kill the 'Net's success factors. Further, conventional wisdom is that NetHeads are represented by public advocacy groups like Free Press, Public Knowledge, and the New America Foundation and aligned with Internet companies like Google, Amazon, and eBay. Again this is directionally correct, but the diversity of the NetHead community ensures divergence on key issues.

Biology teaches that diversity is good. Most business practices teach the opposite. Washington hears much from the telcos and cablecos, and much from the Internet companies and the public advocacy groups, but way too little from the NetHeads themselves. F2C 2008 will provide a platform for NetHead voices and a forum for dialog among all parties with a stake in the future of an open, sustainable, state-of-the-art Internet.

So far (this is changing rapidly so check back here often) F2C speakers include:

* Tim Wu, Professor, Columbia Law School, author of Wireless Carterfone (2007)
* Tom Evslin, founder ITXC, founder AT&T WorldNet, blogger, author, telecom activist
* Reed Hundt, former chairman of the FCC
* Andrew Rasiej, co-founder, Personal Democracy Forum
* Bill St. Arnaud, Chief Research Officer CANARIE and green-broadband blogger
* Brad Templeton, Chairman, Electronic Frontier Foundation
* Katrin Verclas, former Exec. Director NTEN, MobileActive blogger.
* Robin Chase, founder of ZipCar, entrepreneuse and environmentalist.

Tuesday, December 18, 2007

New undersea cable to Iceland to enable zero carbon data centres


[Hibernia Atlantic is planning to build a cable from Ireland to Iceland to attack the data centre opportunity that cheap geothermal and hydro Icelandic power presents. To my mind this is a classic example of the new business opportunities open to first-mover countries and companies who want to address the challenge of global warming. Newfoundland and Labrador in Canada is similarly well poised, with its new undersea fiber networks to Nova Scotia and Greenland combined with the presence of renewable hydroelectric energy at Churchill Falls. Newfoundland and Iceland could be the logical locations for new zero carbon data centers serving North America and Europe. Thanks to Rod Beck for this pointer --BSA]


http://www.hiberniaatlantic.com/documents/8607-IcelandPR-JSAFinal.pdf

HIBERNIA ATLANTIC WILL CONSTRUCT A NEW
SUBMARINE FIBER OPTIC CABLE CONNECTING ICELAND
DIRECTLY TO NORTH AMERICA AND EUROPE
THIS HISTORIC NETWORK BUILD MARKS ANOTHER “INDUSTRY FIRST”
FOR THE DIVERSE TRANS-ATLANTIC CABLE PROVIDER
BOSTON, MA & NEW YORK, NY – August 9, 2007

– Hibernia Atlantic, the only diverse TransAtlantic submarine transport cable provider, today announces its plan to construct a brand new undersea fiber optic cable system connecting Iceland to its northern Atlantic submarine cable system. Hibernia Atlantic will deploy a branching unit off its existing northern cable, giving Iceland direct connectivity to North America, Ireland, London, Amsterdam and the rest of continental Europe. The new cable link will provide connectivity to Iceland at 192 x 10 Gbps Ethernet wavelengths, the only system of its kind in the region. This allows communications traffic from Iceland to go either east or west, with direct access to 42 cities and 52 network Points of Presence (PoPs) and the ability to steer traffic around major metropolitan areas and bypass traditional backhaul routes. Hibernia Atlantic projects the system will become fully operational for customer traffic in the Fall of 2008.

“Many server-intensive customers who require reliable and inexpensive power for collocation services are looking to Iceland as their most cost-effective solution,” states Ken Peterson, Chairman of Hibernia Atlantic’s Board of Directors and the Chairman of Columbia Ventures Corporation, Hibernia’s parent company. “Iceland has an abundance of inexpensive geothermal and hydroelectric power that makes it attractive for many industries. The country is also committed to one day becoming entirely reliant on renewable energy sources, thereby making it an attractive and fertile place to do business.”

“Over a hundred years ago, Iceland marked a milestone in the history of its telecommunications,” continues Bjarni K. Thorvardarson, Hibernia Atlantic’s CEO and Icelandic native. “A submarine telegraph cable was laid from Scotland through the Faroe Islands to the East Coast of Iceland. That same year, a telegraph and telephone line was laid to the capital Reykjavik, thereby ending the country's isolation. Today, more than a century later, Hibernia is proud to announce its plans to build an upgraded submarine cable providing 10 Gbps Ethernet connectivity to Iceland, a major improvement on current capacity, and the addition of yet another key location in the growing list of Hibernia Atlantic operations and Points of Presence. We are pleased and excited to add this segment to our already healthy cable system.”

This new cable provides Iceland much needed diversity from its existing infrastructure. Currently, the only cable with available capacity is Farice, a submarine cable system connecting Iceland and the Faroe Islands to Scotland. Upon completion of the new Hibernia Atlantic cable, which will offer 192 X 10 Gbps wavelengths, Hibernia Atlantic will supply Iceland with a major upgrade in capacity, efficiency, reliability and first-to-market Ethernet services. Hibernia Atlantic will also serve as another redundant option to connect to North America, Ireland and other major European cities.

For the complete Hibernia Atlantic network map and service offerings, videocasts and the Hibernia Atlantic Blog, please visit www.hiberniaatlantic.com. If you have additional questions on network capacity, please email eric.gutshall@hiberniaatlantic.com.
# # #
About Hibernia Atlantic:
Hibernia Atlantic is a privately held, US-owned, TransAtlantic submarine cable provider that delivers “Security through Diversity” to European and US customers. Hibernia offers wholesale capacity prices, unparalleled support, flexibility and service while delivering customized solutions for its customers. Hibernia Atlantic’s redundant rings include access to Dublin, Manchester, London, Amsterdam, Brussels, Frankfurt, Paris, New York City, White Plains, Stamford, Newark, Ashburn, Boston, Albany, Halifax, Montreal and more. Hibernia provides dedicated Ethernet and optical level service up to GigE, 10G and LanPhy wavelengths and traditional SONET/SDH services. Hibernia Atlantic’s cutting-edge network technology offers enterprise customers, carriers and wholesale customers reliable, next-generation bundled services at affordable prices. For more information or a complete network map, please visit www.hiberniaatlantic.com. For Hibernia Atlantic media enquiries, please contact: Jaymie Scotto & Associates 866.695.3629 pr@jaymiescotto.com

Sunday, December 16, 2007

Cloud Routing, Cloud Computing, Global Warming and Cyber-Infrastructure

[To my mind "cloud computing" and "cloud routing" are technologies that will not only radically alter cyber-infrastructure but also enable the Internet and ICT community to address the serious challenges of global warming.

Cloud computing allows us to locate computing resources anywhere in the world. No longer does the computer (whether it is a PC or supercomputer) have to be collocated with a user or institution. With high bandwidth optical networks it is now possible to collocate cloud computing resources with renewable energy sites in remote locations.

Cloud routing will change the Internet in much the same way as cloud computing has changed computation and cyber-infrastructure. Today's Internet topologies are largely based on locating routers and switches with the shortest geographical reach to end users. But once again low cost high bandwidth optical networks allow us to distribute routing and forwarding to renewable energy sites at remote locations. In effect we are scaling up something that we routinely do today on the Internet with such concepts as remote peering and backhauling. By breaking up the Internet forwarding table into small blocks on /16 or finer boundaries we can also distribute the forwarding and routing load across a "cloud" of many thousands of PCs instead of specialized routers.
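
To make that concrete, here is a minimal sketch (in Python, with all node names hypothetical) of the dispatch side of the idea: the top 16 bits of a packet's destination address select a /16 block, and each block is owned by one box in the forwarding "cloud", so no single box ever needs the whole table.

    import ipaddress

    # Sketch: split the IPv4 space on /16 boundaries and assign each block
    # to one of N commodity forwarding nodes (names are made up).
    NODES = ["pc-router-%d" % i for i in range(8)]

    def block_of(dst_ip: str) -> int:
        """Return the /16 block index (top 16 bits) of a destination."""
        return int(ipaddress.IPv4Address(dst_ip)) >> 16

    def node_for(dst_ip: str) -> str:
        """Map a destination address to the node that owns its /16 block."""
        return NODES[block_of(dst_ip) % len(NODES)]

    for dst in ("192.0.2.1", "198.51.100.7", "203.0.113.9"):
        print(dst, "->", node_for(dst))

A real deployment would of course use an explicit block-to-node map rather than a simple modulus, so that blocks can follow the available renewable energy.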

The other big attraction of "cloud" services, whether routing or computational, is their high resiliency. This is essential if you want to collocate these services at remote renewable energy sites around the world. Renewable energy sites, by their very nature, are going to be far less reliable and stable. So highly disruption-tolerant routing and data replication services are essential.

Some excerpts from postings on Gordon Cook's Arch-econ list --BSA]

For more information on cloud routing:

http://green-broadband.blogspot.com/2007/12/new-internet-architectures-to-reduce.html

http://www.canarie.ca/canet4/library/recent/BELnet_Technical_presentation_Dec_11_2007.ppt
For more information on this item please visit my blog at
http://green-broadband.blogspot.com/ or http://billstarnaud.blogspot.com
-------------------------------------------

For more information on Next Generation Internet and reducing Global Warming http://green-broadband.blogspot.com



http://www.businessweek.com/magazine/toc/07_52/B4064magazine.htm

Google and the Wisdom of Clouds
A lofty new strategy aims to put incredible computing power in the hands of many by Stephen Baker

[...]
What is Google's cloud? It's a network made of hundreds of thousands, or by some estimates 1 million, cheap servers, each not much more powerful than the PCs we have in our homes. It stores staggering amounts of data, including numerous copies of the World Wide Web. This makes search faster, helping ferret out answers to billions of queries in a fraction of a second. Unlike many traditional supercomputers, Google's system never ages. When its individual pieces die, usually after about three years, engineers pluck them out and replace them with new, faster boxes. This means the cloud regenerates as it grows, almost like a living thing.

A move towards clouds signals a fundamental shift in how we handle information. At the most basic level, it's the computing equivalent of the evolution in electricity a century ago when farms and businesses shut down their own generators and bought power instead from efficient industrial utilities. Google executives had long envisioned and prepared for this change. Cloud computing, with Google's machinery at the very center, fit neatly into the company's grand vision, established a decade ago by founders Sergey Brin and Larry Page: "to organize the world's information and make it universally accessible and useful."

ONE-WAY STREET
For small companies and entrepreneurs, clouds mean opportunity: a leveling of the playing field in the most data-intensive forms of computing. To date, only a select group of cloud-wielding Internet giants has had the resources to scoop up huge masses of information and build businesses upon it.

This status quo is already starting to change. In the past year, Amazon has opened up its own networks of computers to paying customers, initiating new players, large and small, to cloud computing. Some users simply park their massive databases with Amazon. Others use Amazon's computers to mine data or create Web services. In November, Yahoo opened up a cluster of computers - a small cloud - for researchers at Carnegie Mellon University. And Microsoft (MSFT) has deepened its ties to communities of scientific researchers by providing them access to its own server farms. As these clouds grow, says Frank Gens, senior analyst at market research firm IDC, "A whole new community of Web startups will have access to these machines. It's like they're planting Google seeds." Many such startups will emerge in science and medicine, as data-crunching laboratories searching for new materials and drugs set up shop in the clouds.

Many [scientists] were dying for cloud know how and computing power-especially for scientific research. In practically every field, scientists were grappling with vast piles of new data issuing from a host of sensors, analytic equipment, and ever-finer measuring tools. Patterns in these troves could point to new medicines and therapies, new forms of clean energy. They could help predict earthquakes. But most scientists lacked the machinery to store and sift through these digital El Dorados. "We're drowning in data," said Jeannette Wing, assistant director of the National Science Foundation.

All sorts of business models are sure to evolve. Google and its rivals could team up with customers, perhaps exchanging computing power for access to their data. They could recruit partners into their clouds for pet projects, such as the company's clean energy initiative, announced in November. With the electric bills at jumbo data centers running upwards of $20 million a year, according to industry analysts, it's only natural for Google to commit both brains and server capacity to the search for game-changing energy breakthroughs.

What will research clouds look like? Tony Hey, vice-president for external research at Microsoft, says they'll function as huge virtual laboratories, with a new generation of librarians-some of them human-"curating" troves of data, opening them to researchers with the right credentials. Authorized users, he says, will build new tools, haul in data, and share it with far-flung colleagues. In these new labs, he predicts, "you may win the Nobel prize by analyzing data assembled by someone else." Mark Dean, head of IBM's research operation in Almaden, Calif., says that the mixture of business and science will lead, in a few short years, to networks of clouds that will tax our imagination. "Compared to this," he says, "the Web is tiny. We'll be laughing at how small the Web is." And yet, if this "tiny" Web was big enough to spawn Google and its empire, there's no telling what opportunities could open up in the giant clouds.


================

December 13, 2007, 4:07PM EST

Online Extra: The Two Flavors of Google
A battle could be shaping up between the two leading software platforms for cloud computing, one proprietary and the other open-source by Stephen Baker

Why are search engines so fast? They farm out the job to multiple processors. Each task is a team effort, some of them involving hundreds, or even thousands, of computers working in concert. As more businesses and researchers shift complex data operations to clusters of computers known as clouds, the software that orchestrates that teamwork becomes increasingly vital. The state of the art is Google's in-house computing platform, known as MapReduce. But Google (GOOG) is keeping that gem in-house. An open-source version of MapReduce known as Hadoop is shaping up to become the industry standard.
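
The programming model itself is easy to illustrate. Below is a toy word count in plain Python - not Google's MapReduce or Hadoop code, just the shape of the two phases: map emits (word, 1) pairs, a "shuffle" groups them by key, and reduce sums each group.

    from collections import defaultdict

    # Toy illustration of the MapReduce programming model, in one process.

    def map_phase(doc):
        """Map: emit a (word, 1) pair for every word in a document."""
        for word in doc.split():
            yield word.lower(), 1

    def reduce_phase(word, counts):
        """Reduce: sum every count emitted for one word."""
        return word, sum(counts)

    def mapreduce(docs):
        groups = defaultdict(list)  # the "shuffle": group pairs by key
        for doc in docs:
            for word, n in map_phase(doc):
                groups[word].append(n)
        return dict(reduce_phase(w, ns) for w, ns in groups.items())

    print(mapreduce(["the cloud grows", "the cloud regenerates"]))
    # -> {'the': 2, 'cloud': 2, 'grows': 1, 'regenerates': 1}

In a real cluster the map and reduce calls run on thousands of machines and the shuffle moves data between them; the point of platforms like Hadoop is to supply that distribution, scheduling and fault tolerance for free.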

This means that the two leading software platforms for cloud computing could end up being two flavors of Google, one proprietary and the other - Hadoop - open source. And their battle for dominance could occur even within Google's own clouds. Here's why: MapReduce is so effective because it works exclusively inside Google, and it handles a limited menu of chores. Its versatility is an open question. If Hadoop attracts a large community of developers, it could develop into a more versatile tool, handling a wide variety of work, from scientific data-crunching to consumer marketing analytics. And as it becomes a standard in university labs, young computer scientists will emerge into the job market with Hadoop skills.

Gaining Fans
The growth of Hadoop creates a tangle of relationships in the world of megacomputing. The core development team works inside Google's rival, Yahoo! (YHOO). This means that as Google and IBM (IBM) put together software for their university cloud initiative, announced in October, they will work with a Google clone developed largely by a team at Yahoo. The tool is already gaining fans. Facebook uses Hadoop to analyze user behavior and the effectiveness of ads on the site, says Hadoop founder Doug Cutting, who now works at Yahoo.

In early November, for example, the tech team at The New York Times (NYT) rented computing power on Amazon's (AMZN) cloud and used Hadoop to convert 11 million archived articles, dating back to 1851, to digital and searchable documents. They turned around in a single day a job that otherwise would have taken months.

[...]
========================

December 13, 2007, 5:00PM EST

A Sea Change
Data from the deep like never before
Scientists knee-deep in data are longing for the storage capacity and power of cloud computing. University of Washington oceanographer John R. Delaney is one of many who are desperate to tap into it.

Delaney is putting together a $170 million project called Neptune, which could become the prototype for a new era of data-intensive research. Launching this year, Neptune deploys hundreds of miles of fiber-optic cable connected to thousands of sensors in the Pacific Ocean off the Washington coast. The sensors will stream back data on the behavior of the ocean: its temperature, light, life forms, the changing currents, chemistry, and the physics of motion. Microphones will record the soundtrack of the deep sea, from the songs of whales to the rumble of underwater volcanoes.

Neptune will provide researchers with an orgy of information from the deep. It will extend humanity's eyes and ears-and many other senses-to the two-thirds of the planet we barely know. "We've lived on Planet Land for a long time," says Delaney, who works out of an office near Puget Sound. "This is a mission to Planet Ocean."

He describes the hidden planet as a vast matrix of relationships. Sharks, plankton, red tides, thermal vents spewing boiling water-they're all connected to each other, he says. And if scientists can untangle these ties, they can start to predict how certain changes within the ocean will affect the weather, crops, and life on earth. Later this century, he ventures, we'll have a mathematical model of the world's oceans, and will be able to "manage" them. "We manage Central Park now, and the National Forests," he says. "Why not the oceans?"

To turn Neptune's torrents of data into predictive intelligence, teams of scientists from many fields will have to hunt for patterns and statistical correlations. The laboratory for this work, says Delaney, will be "gigantic disk farms that distribute it all over the planet, just like Google (GOOG)." In other words, Neptune, like other big science projects, needs a cloud. Delaney doesn't yet know on which cloud Neptune will land. Without leaving Seattle, he has Microsoft (MSFT) and Amazon (AMZN), along with a Google-IBM (IBM) venture at his own university.

What will the work on this cloud consist of? Picture scientists calling up comparisons from the data and then posing endless queries. In that sense, cloud science may feel a bit like a Google search.



========================

December 13, 2007, 5:00PM EST

Online Extra: Google's Head in the Clouds
CEO Eric Schmidt talks about the powerful globe-spanning networks of computers known as clouds, and discovering the next big idea

Instead, think about Google as a star-studded collection of computer scientists who have access to a fabulous machine, a distributed network of data centers that behave as one. These globe-spanning networks of computers are known as "clouds." They represent a new species of global supercomputer, one that specializes in burrowing through mountains of random, unstructured data at lightning speed. Scientists are hungry for this kind of computing. Data-deluged businesses need it.

On cloud computing:

What [cloud computing] has come to mean now is a synonym for the return of the mainframe. It used to be that mainframes had all of the data. You had these relatively dumb terminals. In the PC period, the PC took over a lot of that functionality, which is great. We now have the return of the mainframe, and the mainframe is a set of computers. You never visit them, you never see them. But they're out there. They're in a cloud somewhere. They're in the sky, and they're always around. That's roughly the metaphor.

On Google's place in cloud computing:

Google is a cloud computing server, and in fact we are spending billions of dollars-this is public information-to build data centers, which are in one sense a return to the mainframe. In another sense, they're one large supercomputer. And in another sense, they are the cloud itself.

So Google aspires to be a large portion of the cloud, or a cloud that you would interact with every day. Why would Google want to do that? Well, because we're particularly good at high-speed data and data computation.

On Google's software edge:

Google is so fast because more than one computer is working on your query. It farms out your question, if you will, to on the order of 25 computers. It says, "You guys look over here for some answers, you guys look over here for some answers." And then the answers come back very quickly. It then organizes it to a single answer. You can't tell which computer gave you the answer.
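
What Schmidt describes is a scatter-gather pattern. Here is a minimal sketch, assuming nothing about Google's internals (the shards and scores are invented): fan the query out to the workers in parallel, each of which searches only its own slice of the index, then merge the partial answers.

    from concurrent.futures import ThreadPoolExecutor

    # Toy scatter-gather: each "computer" holds one shard of the index.
    SHARDS = [
        {"cloud": 0.9, "router": 0.2},
        {"cloud": 0.4, "energy": 0.8},
        {"energy": 0.5, "cloud": 0.7},
    ]

    def search_shard(shard, query):
        """One worker's partial answer: the query's score in its shard."""
        return shard.get(query, 0.0)

    def scatter_gather(query):
        # Scatter: ask every shard in parallel. Gather: merge the answers.
        with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
            scores = pool.map(lambda shard: search_shard(shard, query), SHARDS)
        return max(scores)  # merge step: keep the best-scoring answer

    print(scatter_gather("cloud"))  # -> 0.9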

On the size of cloud computing:

There's no limit. The reason Google is investing so much in very-high-speed data is because we see this explosion, essentially digital data multimedia explosion, as infinitely larger than people are talking about today. Everything can be measured, sensed, tracked in real time.

On applications that run on a cloud:

Let's look at Google Earth. You can think of the cloud and the servers that provide Google Earth as a platform for applications. The term we use is location-based services. Here's a simple example. Everyone here has cell phones with GPS and a camera. Imagine if all of a sudden there were a mobile phone which took picture after picture after picture, and posted it to Google Earth about what's going on in the world. Now is that interesting, or will it produce enormous amounts of noise? My guess is that it'll be a lot of noise.

So then we'll have to design algorithms that will sort through to find the things that are interesting or special, which is yet another need for cloud computing. One of the problems is you have these large collections coming in, and they have relatively high noise to value. In our world, it's a search problem.

On Google becoming a giant of computing:

This is our goal. We're doing it because the applications actually need these services. A typical example is that you're a Gmail user. Most people's attachments are megabytes long, because they're attaching everything plus the kitchen sink, and they're using Gmail for transporting random bags of bits. That's the problem of scale. But from a Google perspective, it provides significant barriers to entry against our competitors, except for the very well-funded ones.

I like to think of [the data centers] as cyclotrons. There are only a few cyclotrons in physics and every one of them is important, because if you're a top flight physicist you need to be at the lab where that cyclotron is being run because that's where history's going to be made, that's where the inventions are going to come from. So my idea is that if you think of these as supercomputers that happen to be assembled from smaller computers, we have the most attractive supercomputers, from a science perspective, for people to come work on.

On the Google-IBM education project:

Universities were having trouble participating in this phenomenon [cloud computing] because they couldn't afford the billions of dollars it takes to build these enormous facilities. So [Christophe Bisciglia] figured out a way to get a smaller version of what we're doing into the curriculum, which is clearly positive from our perspective, because it gets the concepts out. But it also whets the appetite for people to say, "Hey, I want 10,000 computers," as opposed to 100.



Wednesday, December 12, 2007

High Speed Internet Helps Cool the Planet

[Lightreading has been carrying a very useful blog on the Future of the Internet. Your faithful correspondent has been making some contributions regarding how the Internet, and ICT in general, can help reduce CO2 emissions. This can be done in 3 ways:

(a) The Internet and ICT industry has the tools today to reduce its own global carbon emissions to absolute zero by collocating routers and servers with renewable energy sites and using advanced data replication and re-routing techniques across optical networks. If the ICT industry alone produces 10% of global carbon emissions, this step alone can have a significant impact.

(b) Developing societal applications that promote use of the Internet as an alternative to carbon-generating activities such as tele-commuting, distance learning, etc., as outlined below

(c) Deploying "bits and bandwidth for carbon" trading programs as an alternative strategy to carbon taxes, cap and trade and/or carbon offsets, as for example in the green broadband initiative - http://green-broadband.blogspot.com

Thanks to Mr Roques for posting this pointer on Lightreading --BSA]



Lightreading: The future of the Internet and Global Warming

http://www.internetevolution.com/messages.asp?piddl_msgthreadid=178018&piddl_msgid=151707#msg_151707



Study: High-speed Internet helps cool the planet http://www.news.com/8301-11128_3-9832021-54.html


Tempted to obsess over how another personal habit helps or hurts the Earth? Keep surfing with cable or DSL and you might save carbon in the process, according to the American Consumer Institute.

The world would be spared 1 billion tons of greenhouse gases within a decade if broadband Internet access were pervasive, the group's report (PDF) concluded in October.

Broadband is available to 95 percent of U.S. households but active in only half of them, the study said, noting that near-universal adoption of high-speed Internet would cut the equivalent of 11 percent of oil imports to the United States each year.

How would faster downloads and Web page loads curb the annual flow of globe-warming gases, and by how much? According to the report:

* Telecommuting, a "zero emission" practice, eliminates office space and car commutes: 588 million tons.
* E-commerce cuts the need for warehouses and long-distance shipping: 206 million tons.
* Widespread teleconferencing could bring one-tenth of all flights to a halt: 200 million tons.
* Downloading music, movies, newspapers, and books saves packaging, paper, and shipping: 67 million tons.
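
(Summing the line items: 588 + 206 + 200 + 67 = 1,061 million tons, consistent with the report's headline figure of roughly 1 billion tons over a decade.)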

The Department of Energy estimates that the nation's emissions of carbon dioxide alone total 8 billion tons each year.

A study released and funded by a major Australian telecom company in October also suggested that broader use of broadband could cut that country's carbon emissions by 5 percent by 2015.

All it would take is for more people to use software to monitor shipping schedules, cut the flow of power to dormant gadgets and so forth, the study said.

[...]

Monday, December 3, 2007

The Inefficient Truth - ICT carbon emissions to surpass Aviation Industry


http://www.globalactionplan.org.uk/event_detail.aspx?eid=ef0cecc6-2621-4a3c-962c-e4758b8952f8


The 'Inefficient Truth'

Inefficient ICT Sector's Carbon Emissions set to Surpass Aviation Industry

An Inefficient Truth is the first research report produced by Global Action Plan on behalf of the Environmental IT Leadership Team. The Leadership Team is a unique gathering of major ICT users from a range of different sectors who are committed to taking practical action to cut carbon dioxide emissions.

The report contains four sections.

1. The first section assesses the environmental impact of the ICT sector, which is virtually equivalent to that of the aviation industry.
2. Section two analyses survey results from major ICT users and examines how quickly and effectively the sector is responding to the environmental agenda.
3. The third section takes a snapshot look at some case studies illustrating how companies are implementing practical solutions that are reducing carbon emissions and saving them money.
4. Finally, there is a Call to Action from Global Action Plan setting out some of the challenges facing Government, vendors and users in order to move the sector towards a lower carbon future.

An Inefficient Truth is the first part of a longer journey which will see Global Action Plan using its position as an independent practical environmental charity to help cut carbon emissions from the ICT sector.

The environmental charity Global Action Plan today calls on the UK government to introduce legislation and tax incentives to support the adoption of sustainable ICT policies and strategy in British businesses.

The report includes a national survey that is the first to measure awareness of the link between the use of ICT in business and its contribution to the UK's carbon footprint; to identify the proportion of companies pursuing energy-efficient strategies; and to promote examples of best practice.

Key findings in the report include:

* 61% of UK data centres only have the capacity for two years of growth.
* 37% of companies are storing data indefinitely due to government policy.
* Nearly 40% of servers are underutilised by more than 50%.
* 80% of respondents do not believe their company's data policies are environmentally sustainable.

Trewin Restorick, director of Global Action Plan and chair of the EILT, comments, "ICT equipment currently accounts for 3-4% of the world's carbon emissions, and 10% of the UK's energy bill. The average server, for example, has roughly the same annual carbon footprint as an SUV doing 15 miles-per-gallon! With a carbon footprint now equal to the aviation industry, ICT, and how businesses utilise ICT, will increasingly come under the spotlight as governments seek to achieve carbon-cutting commitments."

The survey, which was completed by CIOs, IT directors and senior decision makers from 120 UK enterprises, found that over 60% of respondents consider time pressures and cost the biggest barriers to adopting sustainable ICT policies, and believe that recognised standards and tax allowances would provide the most valuable support towards reducing ICT's contribution to the UK's carbon emissions.

Restorick adds, "The survey illustrates that ICT departments have been slow off the mark to address their carbon footprint. Awareness is now growing but to turn this into action, ICT departments need help. They need vendors to give them better information rather than selling green froth, they need Government policies to become more supportive and less contradictory, and they need more support from within their organisations."

Logicalis, international ICT provider and sponsor of 'An Inefficient Truth', agrees that legislation and tax incentives are important, but, first and foremost, businesses must evaluate the efficiency of existing ICT infrastructure, citing server under-utilisation and the data centre as prime examples of energy abuse. Tom Kelly, managing director for Logicalis UK, comments:

"The government's draft climate change bill proposes a 60% cut in emissions by 2050. In this environment, a flabby business that guzzles budget and energy is likely to be a prime target for impending legislation.

"CIOs have a responsibility to ensure their ICT infrastructure can support a lean and dynamic business, yet as this survey demonstrates, many ICT departments are unsure if and how they can maximise their existing assets. With data centre capacity at a premium, and energy bills escalating, CIOs are well advised to look inward for energy saving initiatives and to instigate cultural change throughout the business. In short, efficient IT equals green IT."

As a result of the survey Global Action Plan is calling on ICT vendors and the government to provide businesses with the support and tools to implement ICT best practice. These demands include:

* Government to provide incentives to help companies reduce the carbon footprint of their IT activities
* Government to ensure that there is a sufficient supply of energy for data centre needs in the future
* Government to review its policies on long-term data storage to take into account the carbon implications
* ICT vendors to significantly improve the quality of their environmental information
* ICT departments to be accountable for the energy costs of running and cooling ICT equipment
* Companies to ensure ICT departments are fully engaged in their CSR and environmental policies
* Companies to ensure that their ICT infrastructure meets stricter efficiency targets

Gary Hird, Technical Strategy Manager for John Lewis Partnership and member of the EILT comments: "Green Computing is an opportunity for us all to clearly demonstrate IT's value in helping our companies tackle an urgent, and global, issue. It is vital that we do a good job collectively and that means being open about the specific problems we're facing and the solutions we're pursuing. The Global Action Plan survey provides a 'current state' understanding of companies' green IT initiatives and the obstacles we must overcome to help them succeed."

Carbon dioxide emissions from ICT industry equal those of aviation industry


[Here is a fascinating news clip from Sky News that puts the carbon emissions of the ICT industry in perspective. It claims that carbon emissions from the global ICT community equal those of the worldwide aviation industry and are growing much faster. One small computer server generates as much carbon dioxide as an SUV with a fuel efficiency of 15 miles per gallon. The ICT industry in the UK consumes the equivalent amount of electricity as produced by 4 nuclear reactors. The aviation industry is already going to great lengths to mitigate its carbon footprint, but to date few comparable efforts have been undertaken by the ICT industry. And yet the ICT industry, in my opinion, is in the best position of any sector in society to reduce its carbon footprint to nearly zero and beyond.

Thanks to Conal Henry for this posting on Gordon Cook's Arch-econ list --BSA]

http://news.sky.com/skynews/video/videoplayer/0,,31200-1295311,.html

New Internet architectures to reduce carbon emissions

[This is another posting as part of my own evolving thought process on how the Internet, and in particular research and education networks, can help reduce carbon dioxide emissions: firstly by re-engineering the network, and secondly by deploying applications and services that will encourage others to use the Internet in novel ways in order to minimize their own carbon footprint.

First of all I would like to thank all those people who sent me e-mails with additional suggestions, comments and ideas on how ICT technologies, in particular the Internet and broadband can be used to mitigate the impact of global warming. Given the large number of e-mails I have received on the subject I apologize if I have not been able to reply to some of you directly.

I want to assure you that none of my ideas, and those of others that have been posted here, are in any way cast in stone or anywhere close to deployment. Many of these ideas come from my own fevered brain and may never survive close scrutiny by experts or validation in the marketplace. The purpose of this e-mail and my blog is to stimulate some creative thinking in the Internet community, and especially within R&E networks, on ways we can collectively design "green" Internet solutions. This is a community that is used to rapid change and has many of the most innovative people in business or academia. Hopefully my blog, in some small way, will stimulate others to develop more robust and scalable solutions that help address what, in my opinion, is the biggest challenge of this generation and of this decade - global warming.


In today's modern Internet networks, some of the biggest energy sinks, and consequently significant producers of carbon emissions due to their electrical and cooling requirements, are the core routers.

Internet routers are custom-designed pieces of computing equipment that must operate at very high speeds, doing fast lookups in the forwarding table in order to process packets at line speed. The need for fast lookups is further compounded by the continued growth of routing tables over the past few years.

To process packets at wire-line speed, modern routers usually have multiple ASICs on the forwarding card. Each ASIC handles only a subset of the forwarding address table, which is split up between the various ASICs on /8, /16 (or finer grained) address boundaries.

But an alternative to big core routers with multiple ASICs is to deploy networks of multiple virtual routers, with each network of virtual routers assigned an address block. All the virtual routers for a given address block are linked together by a dedicated lightpath network, independent of the parallel virtual routers and networks for other address blocks.

Each address range or block would have a global set of virtual routers dedicated to forwarding and routing for that address block, and the optical connections between the virtual routers can be traffic engineered to optimize flows for that block. As well, separate OSPF (or IS-IS) networks can be deployed for each address block. At inter-domain boundaries these separate address block networks can be aggregated into a single connection to a neighbouring AS, or arrangements can be made to advertise separate BGP networks with parallel ASes for each address block network.

At first blush this seems an incredible waste of resources. Not only would separate routing tables and networks have to be maintained, but multiple copies of filtering policies etc. would have to be deployed for each network address block.

However, breaking up the forwarding table into multiple (roughly) parallel forwarding networks, where each network is assigned a specific address block, allows us to deploy much cheaper commoditized routers using off-the-shelf open source routing engines like Vyatta.

Because these routers don’t have to do lookups on the entire forwarding table, they can be built from inexpensive commodity components. In effect we are trading off large ASIC-based forwarding tables against commodity virtual routers with multiple parallel optical networks, one per address block.
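
To see why the per-block routers stay small, here is a back-of-the-envelope sketch (hypothetical prefixes and interface names; it also glosses over prefixes shorter than /16, which would first have to be split): carve a full forwarding table on /16 boundaries so that each commodity virtual router loads only the slice for its assigned blocks.

    import ipaddress
    from collections import defaultdict

    # Hypothetical sketch: carve a forwarding table into per-/16-block
    # slices so each commodity virtual router loads only its own slice.
    FULL_TABLE = {
        "192.0.2.0/24": "if0",
        "192.0.64.0/18": "if1",
        "198.51.100.0/24": "if2",
    }

    def slice_table(table):
        """Group prefixes by the /16 block that encloses them."""
        slices = defaultdict(dict)
        for prefix, next_hop in table.items():
            net = ipaddress.IPv4Network(prefix)
            block = int(net.network_address) >> 16  # enclosing /16 block
            slices[block][prefix] = next_hop
        return slices

    for block, fib in sorted(slice_table(FULL_TABLE).items()):
        print(ipaddress.IPv4Address(block << 16), "slice:", fib)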

More importantly, these low cost (and low energy, hence low carbon emission) devices can now be collocated near renewable energy sites. Not every site needs to support all the virtual routers or carry the entire routing table. Instead, address block networks can be engineered with different topologies, linking together independent renewable energy sites that support alternate nodes for the various address block networks.

Because we have also broken down the Internet into many (roughly) parallel networks aligned along address block boundaries, outages and re-routing can be handled more easily, especially as the routing nodes are located at renewable energy sites such as windmills and solar power farms.

Users would be backhauled over dedicated optical links to two or more virtual router renewable energy sites. The assumption is that an all-optical backhaul network has much lower carbon emissions than an energy-consuming electronic local router or stat-mux switch.

This architecture would be ideal for R&E networks as generally they have a very small number of directly connected organizations such as universities and research centers. These organizations can even pre-classify their outgoing packets along the address block boundaries and send them out separate parallel optical channels to the nearest renewable energy site(s) supporting the multiple virtual routers for each address block.

Companies like Google are also well positioned to take advantage of this architecture, as they have a worldwide distributed network of low cost servers and are rumored to be deploying custom-developed 10 GbE switches on their own private optical network. The same principles that Google used for its network of search engines could be applied to a virtual routed network as described here.

Optical networks are much better suited for this application than MPLS and PBT networks, which require electronic devices to do the forwarding and label switching. Optical networks can be significantly more energy efficient than electronic networks, though unquestionably far less efficient in terms of multiplexing packets. Tools like Inocybe's Argia can be used to do the traffic engineering of the various optical paths assigned to each address block.

For more information on these architectural concepts please see my blog or presentations at http://green-broadband.blogspot.com


Friday, November 30, 2007

Replacing electrical transmission lines with optical networks

One of the challenges of delivering renewable energy such as wind or solar power is the high cost of the electrical transmission lines needed to carry the power to where it is needed. Unfortunately, ideal solar and wind power sites are rarely located near major urban centers. Most renewable energy systems produce relatively small amounts of power compared to coal or nuclear power plants, and as a consequence the cost of the electrical transmission line, given the distances to reach renewable energy sites, can completely undermine the business case for deploying a renewable energy system in the first place.

But maybe there is another solution: rather than building expensive electrical transmission lines to link these remote renewable energy sites to the electrical grid, we instead move our cyber-infrastructure servers, storage and other facilities to the renewable energy sites themselves and link them with optical networks to the global information grid - the Internet.

One of the fastest growing energy consuming sectors is information and communication technologies (ICT). It is estimated that ICT consumes upwards of 9% of all the energy output in North America through direct electrical consumption and cooling. Cyber-infrastructure facilities, corporate server farms, etc. are major sinks for electrical power and cooling and are putting enormous strains on the electrical systems of our cities, universities and businesses.

With today's modern telecommunication facilities, there is no reason why these cyber-infrastructure facilities and server farms need to be located in close proximity to their users. High speed optical networks allow these facilities to be located anywhere. In fact many large corporations like Google, Microsoft and Amazon are already starting to collocate their server farms at low cost energy sites around the world.

The obvious next step in this evolution is to collocate cyber-infrastructure equipment and servers directly at the renewable energy sites themselves. And rather than building expensive electrical transmission systems to connect these renewable energy sites to the electrical grid, we instead build much cheaper optical networks that interconnect the servers to the global information grid - the Internet.

One downside of this approach is that these cyber-infrastructure facilities and servers will not be connected to any electrical grid, and as a result they will experience many more outages and much more downtime due to the waxing and waning of the wind or the diurnal cycle of the sun. But the beauty of ICT is that we already have the technology to do rapid load balancing of servers during outages, and of course the Internet from day one has been designed to route around outages.

We have the technology at hand to build "follow the wind" or "follow the sun" computing grids using optical networks to ensure extreme high reliability information systems and computing grids regardless of whether or not components of the underlying physical computational network and/or storage facilities are available and on line. The mesh of global optical networks around the world will further help provide load balancing due to varying wind and solar conditions.
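
In software terms a "follow the wind" or "follow the sun" grid is simply a placement policy. Here is a minimal sketch, with entirely invented site names and power figures: run each job at whichever renewable-powered site currently has the most spare capacity, and defer or migrate when none does.

    # "Follow the sun/wind" placement sketch -- all site data is invented.
    SITES = {
        "desert-solar-1": {"supply_kw": 900, "load_kw": 300},
        "prairie-wind-2": {"supply_kw": 150, "load_kw": 140},
        "fjord-hydro-3": {"supply_kw": 500, "load_kw": 450},
    }

    def best_site(job_kw):
        """Pick the site with the most spare renewable capacity right now."""
        headroom = {name: s["supply_kw"] - s["load_kw"]
                    for name, s in SITES.items()}
        name = max(headroom, key=headroom.get)
        return name if headroom[name] >= job_kw else None  # None: defer

    print(best_site(200))  # -> desert-solar-1 (600 kW of headroom)

As supply forecasts change (sunset, dying wind), the same policy re-runs and jobs migrate over the optical network to the new best site.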

Ben Bacque of Alcatel-Lucent has even suggested that we locate these renewable energy/server farms in Canada's remote Arctic regions, because this would also help address the cooling challenges of today's modern servers. Up to now it has been impractical to locate renewable energy systems in Canada's high north because of the high cost of building transmission lines over immense distances across inhospitable terrain.

Building optical networks to remote renewable energy systems will also allow governments to achieve an important social objective of delivering high speed Internet to remote and rural communities and would provide much needed jobs for the maintenance and care of these server farms and renewable energy systems.

Optical networks can also be used to interconnect micro-power systems that provide power to peer-to-peer storage and computing grids. As with renewable energy systems, the existing electrical grid is ill suited to connecting hundreds, if not thousands, of small micro-power systems located at our homes and businesses. Interconnection to the grid requires costly switches and meters that must be installed by professional electricians, and the distribution system must be re-configured to handle power originating from those who were traditionally consumers of electricity.

So rather than connecting the micro-power systems to the electrical grid, we can perhaps use them to power locally hosted servers and storage facilities. And as before, these servers and storage facilities can be interconnected via a well-proven peer-to-peer grid over the Internet.

Robin Chase sent me an interesting pointer to a talk given by John Holdren (Director of the Woods Hole Research Center) to the UN in September 2007. John Holdren's slides had a stunning number: if worldwide CO2 emissions peak in 2015 - that's seven years from now - we have a 50 percent chance of avoiding catastrophic effects of climate change.

http://networkmusings.blogspot.com/2007/10/closing-climate-change-window-of.html

Most scientists already think we are at a tipping point in terms of CO2 emissions.

To my mind the ICT industry and research community as a whole has a moral responsibility to help address this problem. If the ICT industry and research community consumes 9% of the global energy budget, we can safely assume that ICT contributes 9% of the world's emissions of carbon dioxide. But unlike any other sector of society, the ICT community has the means and tools to virtually eliminate this entire carbon footprint (and possibly more) through the many techniques I have outlined in this blog and previous postings. A 9% drop in carbon emissions over the next decade would dramatically mitigate the threat of global warming.

We need a call to action by the ICT industry and research community. We need to start immediately testing and experimenting with these ideas and many more that I am sure will be thought of in the process of identifying possible solutions. We need to immediately freeze the carbon budgets of our universities, research centers and server farms. Universities and research centers are the institutions that should be demonstrating global leadership and developing new solutions to address global warming.

New revenue opportunities for R&E networks and cyber-infrastructure

One of the growing challenges for many campuses around the world is how to accommodate the power and cooling requirements of cyber-infrastructure facilities such as high performance computers, storage facilities, etc.

Increasingly, the costs of the bricks and mortar, power and cooling to house these facilities significantly outweigh the costs of the actual cyber-infrastructure equipment.

In Canada, for example, many universities that are part of the HPC consortium called Compute Canada will have to make significant investments in the coming year to install and upgrade power and air conditioning systems to host a range of new computation facilities funded by CFI at various institutions.

The carbon emission impact has yet to be taken into consideration in many of these plans. The carbon footprint of a modern HPC facility can easily exceed that of several SUVs.

Global warming is not only a problem to be solved by politicians. It is a global issue that we all have a personal responsibility to address, whether we are an average Joe citizen or a world-leading computer scientist.

Researchers and funding bodies need to take into consideration the carbon dioxide emission impact of all these cyber-infrastructure facilities. Building the fastest and best supercomputer regardless of its environmental impact is simply not an option any more. Universities and computing science researchers should be playing a leading role in identifying new cyber-infrastructure solutions which not only address their research requirements but also take into account the carbon emission impact of these facilities. Perhaps deploying energy efficient grids, sharing under-used computational facilities, or utilizing virtual computing is a better answer than building a physical cyber-infrastructure facility at every campus.

We also need to address the ongoing proliferation of computer clusters throughout university departments. Unfortunately most of these departments do not pay for the power and cooling costs associated with these facilities, and so do not appreciate their true impact on the overall energy use of the university or the associated carbon emissions. As I mentioned on this blog before, using Amazon's EC2/S3 service can in many cases be cheaper than the power costs alone of a modern computational cluster, never mind the operational and overhead costs of running such a facility.

This is where regional and national research networks can play an important role. There are now many carbon offset companies who will audit programs designed to reduce carbon emissions. They will also broker payment of real dollars for the carbon reductions that result from a program. If an organization sets up a tele-commuting program and demonstrates a real and auditable reduction in carbon emissions, it can earn revenue through the sale of carbon offsets to energy companies and other organizations. A good example is IBM, which is working with a carbon offset company offering up to $1 million in carbon offsets for organizations that move away from their physical servers to high energy efficiency virtual servers operated by IBM.

R&E networks are ideally positioned to negotiate and implement these carbon offsetting solutions. Network organizations are essential for implementing any carbon offset strategy. As well, the carbon impact of an optical R&E network is minuscule compared to the carbon footprint of many high performance computers and other facilities. The more we can use network bits and bandwidth for advanced science instead of physical facilities, the greater the potential for earning valuable offset dollars (and, I would argue, the better the science community will be served).

Another potential carbon offset revenue opportunity is distance learning and tele-medicine. Although the jury is still out on the pedagogical value of distance learning, encouraging students to undertake some of their course work at home can be just as effective as tele-commuting in terms of earning carbon offset dollars. The same goes for tele-medicine. If companies can earn carbon offset dollars by implementing tele-commuting programs, universities and R&E networks should be able to earn carbon offsets for offering distance learning and tele-medicine programs. (But as I argued in previous posts, rather than exchanging dollars in the form of carbon offsets, I would recommend exchanging other "zero carbon" rewards such as offering participating students free eTextbooks, free music videos, etc.)

Finally, R&E optical networks have an important role in redefining the entire value chain of the network itself. Many R&E networks are largely underutilized by traditional measures such as traffic volume. Given these traffic volumes (and slowing growth), it would have been far cheaper in some cases for universities or funding agencies to purchase managed bandwidth from the carriers rather than build their own R&E networks.

But nobody has yet measured the carbon impact of these various optical, wavelength and customer-owned networks. I would argue that the carbon footprint of dark fiber, wavelengths and customer-controlled networks with optical switches is in fact significantly less than that of a traditional carrier with expensive high-end switches and (especially) routers, which collectively consume the power of a small nuclear reactor. British Telecom, for example, one of the biggest consumers of energy in the UK, has announced an initiative to use renewable energy sources.

Instead of measuring the value of a network in terms of "bits per second", we should instead be using "bits per carbon". And while the utilization of an R&E network may be low by the traditional measurement standard of "bps", its impact on the environment may be significantly less when measured in "bpc" compared to a commercial network. And once again, the R&E networks can help develop a new business model through carbon offset trading by demonstrating that an optical lightpath mesh network has a significantly smaller carbon footprint than a traditional electronic routed network.
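
As a rough illustration of how a "bits per carbon" comparison might be computed: the metric is the proposal above, but the traffic, power and emissions figures below are invented purely for the example.

# A minimal "bits per carbon" (bpc) calculation, under assumed figures for
# an optical lightpath network and a routed commercial network.

def bits_per_carbon(avg_gbps: float, kw_drawn: float,
                    kg_co2_per_kwh: float = 0.5) -> float:
    """Bits carried per kilogram of CO2 emitted, at steady state."""
    bits_per_hour = avg_gbps * 1e9 * 3600
    kg_co2_per_hour = kw_drawn * kg_co2_per_kwh
    return bits_per_hour / kg_co2_per_hour

# Lightly loaded optical R&E network: modest traffic but very low power draw.
optical = bits_per_carbon(avg_gbps=2.0, kw_drawn=5.0)
# Heavily loaded routed network: more traffic, far more power in routers.
routed = bits_per_carbon(avg_gbps=40.0, kw_drawn=500.0)

print(f"optical: {optical:.3e} bits/kg CO2")
print(f"routed : {routed:.3e} bits/kg CO2")

Even though the routed network carries twenty times the traffic in this example, the optical network delivers roughly five times more bits per kilogram of CO2 under these assumed numbers.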

An even more interesting and radical concept is to replace expensive electricity transmission lines with optical networks. Instead of "wheeling" expensive power to physical servers at universities, we can instead "wheel" inexpensive bits between virtual servers and grids located at renewable energy sites around the world.

For example, the global community of optical research networks (GLIF) could build a "follow the sun" grid infrastructure. Solar-powered high performance computing facilities could be located at remote desert locations throughout the world. These systems would not be connected to any electrical grid, but would instead be linked by a global high speed optical network. As the sun starts to set on any given HPC site, the currently running jobs and OS images would immediately be transferred over the optical network to the next HPC site just coming active with the rising sun.
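
A toy version of such a scheduler might look like the following sketch. The site names, longitudes and crude daylight test are invented for illustration, and it ignores real-world migration costs and transfer times:

# A toy scheduler for the "follow the sun" grid described above: HPC sites
# at different longitudes, with jobs migrated over the optical network to
# whichever site currently has daylight.

from dataclasses import dataclass

@dataclass
class SolarSite:
    name: str
    longitude_deg: float  # east positive

    def is_sunlit(self, utc_hour: float) -> bool:
        # Crude daylight test: local solar time between 06:00 and 18:00.
        local = (utc_hour + self.longitude_deg / 15.0) % 24
        return 6.0 <= local <= 18.0

SITES = [
    SolarSite("Atacama", -69.0),
    SolarSite("Sahara", 10.0),
    SolarSite("Gobi", 105.0),
]

def next_site(utc_hour: float) -> SolarSite:
    """Pick a sunlit site to receive the running jobs and OS images."""
    for site in SITES:
        if site.is_sunlit(utc_hour):
            return site
    raise RuntimeError("no sunlit site; add sites or battery buffering")

for hour in (0, 8, 16):
    print(f"{hour:02d}:00 UTC -> run jobs at {next_site(hour).name}")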

The bottom line is that I believe research and education networks can play an important leadership role in defining these new business and network models built around trading "bits and bandwidth for carbon". They could also be working with universities to freeze, if not decrease, the carbon output of these institutions. To my mind universities should be at the forefront of our society in finding solutions and new business models to address global warming. At the very least they should not be among the worst offenders in terms of all the high carbon emission cyber-infrastructure facilities now being deployed at our campuses.

Tuesday, November 27, 2007

The next big eCommerce opportunity for Google, Amazon, eBay - carbon trading


[Google, Amazon and eBay are classic examples of what America is best at: ingenuity and entrepreneurial capitalism. They dominate the global eCommerce marketplace.

Although the eCommerce economy has grown by leaps and bounds over the past decade, the eCommerce activities of these companies are still a relatively small part of the overall economy.

While click advertising, eTrading and selling merchandise over the Internet have done wonders for the bottom line of these companies in the past decade, these markets are now maturing. The entire global advertising market is still very small compared to other economic activities. As well, large portions of society still do not use eBay or Amazon for a variety of reasons, including security, cross-border shipping issues and so on. It is unlikely that these companies will be able to continue the spectacular growth of the past decade without some fundamental new business paradigm shift. Mobile eCommerce may provide some incremental revenues, but I think its contribution to the bottom line will be minuscule at best.

The big challenge for these companies and many others like them is to move to the next wave of eCommerce which I believe will be carbon trading in exchange for bits and bandwidth.

There is a growing consensus that global warming is one of the greatest threats facing humanity. Increasingly governments and citizens are becoming aware of the severity of this threat and are clamouring for solutions.

To date the most obvious approaches to mitigating global warming have been to impose carbon taxes or to implement various forms of carbon trading such as cap and trade or carbon offsetting.

Carbon taxes, however, even if revenue neutral, are going to meet stiff political resistance. Rather than imposing taxes, can we instead provide carbon "rewards", where consumers and businesses are rewarded for reducing their carbon footprint rather than penalized if they don't?

To date carbon trading has been associated with various government mandated cap and trade systems or unregulated carbon offset trading. In cap and trade systems, large carbon emitters are allocated carbon emission targets and can only exceed these targets by purchasing carbon permits from organizations that produce far less carbon. In offset trading, a number of independent companies audit and trade the carbon offsets of individuals and businesses, offsetting high carbon emission activities such as air travel against telecommuting and other energy saving practices.

However these markets are very immature and relatively small.

Instead of trading carbon emissions for carbon reductions, perhaps a better scheme would be to trade bits and bandwidth, which have an extremely small carbon footprint, against activities that have a heavy one.

A couple of simple examples come to mind which have been mentioned before on this blog:

(a) Amazon could work with public transportation systems and offer free eBooks for its new Kindle eReader to people who buy public bus and subway passes. Amazon would get a small percentage of every bus pass to pay for its eBooks, and consumers would have a new incentive to take the bus or subway. Even consumers who still drive their SUV to work would be helping out by providing a new revenue source to public transportation.

(b) Free broadband Internet could be offered to consumers who are willing to pay a carbon premium on their gas and/or electric bill. See http://green-broadband.blogspot.com

(c) University students could be rewarded with free cell phone service, music and/or videos if they agree to pay a carbon premium on their parking passes.

Etc

To my mind the trading and exchange of bits and bandwidth for carbon represents an entirely new eCommerce business model with significant revenue potential. Companies that are first movers in this space will quickly dominate this new market.

Carbon credit trading does not need to be limited to simple bilateral transactions; like money, it can create multiplier effects, where consumers of bits and bandwidth can purchase other products and services with their carbon credits.

For example, universities could offer voluntary programs where students pay a premium on anything that creates a carbon footprint, such as parking fees or residence power consumption. In exchange the students would be granted free access to the music and film industry libraries.

The "bits for carbon" fee would encourage students to reduce use of their automobiles and/or reduce their energy consumption within their residences or other activities. The university could also undertake energy audits on the students activities to earn additional valuable carbon credits, in the same businesses now earn carbon credits for promoting tele-working, tele-presence etc

But instead of paying the record and music industry actual money for the designated authorized music and video services, they would instead be paid in an equivalent value of carbon credits, or the university would only purchase "originating" credits produced by the music/video industry through its own carbon reduction activities. The music and motion picture industries could then possibly double their money by instituting their own carbon reduction schemes and trading in these credits, or they could sell them to a variety of carbon trading brokers.
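
A minimal ledger makes this circulation concrete. The parties and amounts below are invented; the point is only that the same credits can change hands several times before being cashed out on a carbon market:

# A toy ledger tracing the credit flow described above: a student's carbon
# premium becomes credits, the credits pay for music downloads, and the
# industry then resells them to a broker. All parties and amounts are
# invented for illustration.

balances = {"student": 0.0, "university": 0.0,
            "music_industry": 0.0, "broker": 0.0}

def transfer(credits: float, src: str, dst: str, memo: str) -> None:
    balances[src] -= credits
    balances[dst] += credits
    print(f"{credits:5.1f} credits  {src} -> {dst}  ({memo})")

# 1. Student pays a carbon premium on a parking pass; the university
#    converts the fee into audited carbon credits.
balances["university"] += 10.0
transfer(10.0, "university", "student", "premium converted to credits")
# 2. Student spends the credits on authorized music downloads.
transfer(10.0, "student", "music_industry", "payment in credits, not cash")
# 3. The industry sells the credits on to a carbon broker for cash.
transfer(10.0, "music_industry", "broker", "resold on a carbon market")

print(balances)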

As one starts to think about these concepts, it becomes apparent there could be a whole range of business opportunities in trading bits and bandwidth for carbon.

Tuesday, November 13, 2007

Free the Music at our universities and Save the Planet


[Once again the MPAA and the RIAA are up to their dirty tricks, trying to block the downloading of music and videos by university students by threatening to restrict funding to universities unless the institutions agree to alternatives such as paying monthly subscription fees to the music and motion picture industry.

One possible solution is for the universities and Educause to call the MPAA and RIAA's bluff and take the high road by instituting "bits for carbon" trading programs at their respective institutions.

Universities could offer voluntary programs where students pay a premium on anything that creates a carbon footprint, such as parking fees or residence power consumption. In exchange the students would be granted free access to the music and film industry libraries. As well, part of the "bits for carbon" fee would be used by the university, working in partnership with the national research and education networks, to set up an extremely high speed bandwidth connection to distributed content servers to enable fast download of RIAA- and MPAA-approved content.

The "bits for carbon" fee would encourage students to reduce use of their automobiles and/or reduce their energy consumption within their residences or other activities. The university could also undertake energy audits on the students activities to earn additional valuable carbon credits, in the same businesses now earn carbon credits for promoting tele-working, tele-presence etc

But instead of paying the RIAA and MPAA actual money for the designated authorized music and video services, they would instead be paid in an equivalent value of carbon credits. The music and motion picture industries could then possibly double their money by instituting their own carbon reduction schemes and trading in these credits, or they could sell them to a variety of carbon trading brokers. -- BSA]





[Note: This item expands on the call to action from EDUCAUSE that I
posted earlier. DLH]

Democrats: Colleges must police copyright, or else

By Declan McCullagh

Story last modified Fri Nov 09 18:19:33 PST 2007


New federal legislation says universities must agree to provide not
just deterrents but also "alternatives" to peer-to-peer piracy, such
as paying monthly subscription fees to the music industry for their
students, on penalty of losing all financial aid for their students.

The U.S. House of Representatives bill (PDF), which was introduced
late Friday by top Democratic politicians, could give the movie and
music industries a new revenue stream by pressuring schools into
signing up for monthly subscription services such as Ruckus and
Napster. Ruckus is advertising-supported, and Napster charges a
monthly fee per student.

The Motion Picture Association of America (MPAA) applauded the
proposal, which is embedded in a 747-page spending and financial aid
bill. "We very much support the language in the bill, which requires
universities to provide evidence that they have a plan for
implementing a technology to address illegal file sharing," said
Angela Martinez, a spokeswoman for the MPAA.

According to the bill, if universities did not agree to test
"technology-based deterrents to prevent such illegal activity," all of
their students--even ones who don't own a computer--would lose federal
financial aid.

The prospect of losing a combined total of nearly $100 billion a year
in federal financial aid, coupled with the possibility of overzealous
copyright-bots limiting the sharing of legitimate content, has alarmed
university officials.

"Such an extraordinarily inappropriate and punitive outcome would
result in all students on that campus losing their federal financial
aid--including Pell grants and student loans that are essential to
their ability to attend college, advance their education, and acquire
the skills necessary to compete in the 21st-century economy," a letter
from university officials to Congress written on Wednesday said.
"Lower-income students, those most in need of federal financial aid,
would be harmed most under the entertainment industry's proposal."

The letter was signed by the chancellor of the University of Maryland
system, the president of Stanford University, the general counsel of
Yale University, and the president of Penn State.

They stress that the "higher education community recognizes the
seriousness of the problem of illegal peer-to-peer file sharing and
has long been committed to working with the entertainment industry to
find a workable solution to the problem." In addition, the letter says
that colleges and universities are responsible for "only a small
fraction of illegal file sharing."

The MPAA says the university presidents are overreacting. An MPAA
representative sent CNET News.com a list of campuses that have begun
filtering files transferred on their networks, including the
University of Florida (Red Lambda technology); the University of Utah
(network monitoring and Audible Magic); and Ohio's Wittenberg
University (Audible Magic).

For each school taking such steps, the MPAA says, copyright complaints
dramatically decreased, in some cases going from 50 a month to none.

The MPAA's Martinez did warn that the consequences of violating the
proposed rules would be stiff: "Because it is added to the current
reporting requirements that universities already have through the
Secretary of Education, it would have the same penalties for
noncompliance as any of the other requirements under current law."

Neither the Recording Industry Association of America nor the
Association of American Universities was available for comment on
Friday.

Monday, November 12, 2007

The Green Grid - the new imperative for grids and VOs

[At the end of the day the big driver for grids and creation of virtual
organizations may not necessarily be eScience or eResearch, but the need for
universities and businesses to reduce power consumption and earn carbon
credits in order to reduce their carbon footprint. IBM has already announced
a virtual computing program where universities and businesses can replace
their existing physical clusters with a much more efficient virtual machine,
while at the same time earning thousands of dollars in carbon credits. I
suspect in a very short time you will see many more companies like Google,
Amazon, Microsoft and others offer similar carbon credit initiatives using
their various "cloud" computing networks to replace a variety of campus
servers such as mail, web, etc etc.

National research funding agencies can play a significant leadership role by
including carbon footprint as one of the criteria in awarding funding to
groups requesting computation and storage facilities. Already it is
estimated that the cost of equivalent computational power from services like
Amazon EC2/S3 is less than the power consumption alone of an HPC cluster at a
university.

But in addition to the energy savings and reduced carbon footprint, any
development that encourages the use of virtualized computation and networks
will enable a greater flowering of advanced new applications and services
built around SOA, Web 2.0 and mashups.

-- BSA]



http://www.thegreengrid.org/about/overview

The Green Grid is a consortium of information technology companies and
professionals seeking to improve energy efficiency in data centers around
the globe. The Green Grid takes a broad-reaching approach to data center
efficiency focusing on data center "power pillars" that span the gamut of
technology, infrastructure and processes present in today's data center
environments. The consortium's working focus includes research, standards
writing, published studies and continuing education.

Comprised of an interactive body of members who share and improve current
best practices around data center efficiency, The Green Grid scope includes
collaboration with end users and government organizations worldwide to
ensure that each organizational goal is aligned with both developers and
users of data center technology. All interested parties are encouraged to
join and become active participants in the quest to improve overall data
center power efficiencies.

http://searchcio.techtarget.com/tip/0,289483,sid19_gci1281024,00.html?track=NL-275&ad=612169&asrc=EM_NLT_2533481&uid=1062647


Most machines use 5% to 10% of available computing power. By utilizing
server capacity more efficiently through virtualization, companies can do
the same job with 50% to 60% of their existing server population. This
translates into major savings in hardware, electricity and cooling.
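
A worked version of that arithmetic, using the 50% to 60% figure from the excerpt above and otherwise assumed numbers for fleet size, per-server draw, cooling overhead and power price:

# Estimated savings from consolidating onto 50-60% of the existing fleet.
# The keep fraction comes from the excerpt; everything else is assumed.

fleet = 1000                  # assumed physical servers before virtualization
keep_fraction = 0.55          # midpoint of the 50-60% figure quoted above
watts_per_server = 400        # assumed average draw per server
cooling_multiplier = 1.8      # assumed cooling/power-distribution overhead
power_price = 0.10            # assumed $/kWh
hours_per_year = 8760

retired = int(fleet * (1 - keep_fraction))
kwh_saved = retired * watts_per_server * cooling_multiplier * hours_per_year / 1000
print(f"servers retired : {retired}")
print(f"energy saved    : {kwh_saved:,.0f} kWh/year")
print(f"electricity bill: ${kwh_saved * power_price:,.0f}/year")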

Virtualization enables IT managers to divide a single server, or multiple
servers, into separate environments, each of which can run a different
operating system and serve different applications. Virtual machine (VM)
"images" can be ported from one physical server to another. Central
administrative software can then balance processing loads and allocate
storage capacity on an as-needed basis, across multiple virtual machines and
physical servers. One or more VMs can take up the slack during a planned or
unplanned outage.


http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9045278&intsrc=news_ts_head

IBM to let customers sell server energy savings on carbon markets
Another financial incentive for reducing power in data centers

November 01, 2007 (Computerworld) -- IBM will announce Friday a program that
will make it possible for its customers to document server energy savings --
and even trade them for cash, if they want, on emerging carbon markets.

How it works: If you take distributed systems -- for instance, x86 servers
-- and consolidate them on a mainframe, the move will result in an energy
savings. Those savings can be calculated based on reference data, a task
that will fall to Neuwing Energy Ventures, an independent firm verifying and
trading in energy efficiency certificates.

More specifically, IBM said its ongoing consolidation of 3,900 distributed
systems onto 33 mainframes will eventually save the company 119,000 megawatt
hours annually. One energy efficiency certificate is issued for each
megawatt hour saved per year.

In IBM's example, the certificates would have an estimated value of between
$300,000 and $1 million based on market conditions, said Rich Lechner, IBM's
vice president of IT optimization. The certificates can be issued for each
year of the life of the project.
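
Dividing the valuation range by the number of certificates gives the implied price per certificate. The megawatt-hour and dollar figures below come straight from the article; only the per-certificate arithmetic is added:

# Certificate arithmetic implied by the article: one certificate per
# megawatt-hour saved per year, 119,000 MWh saved, valued at $300,000
# to $1 million depending on market conditions.

mwh_saved_per_year = 119_000          # from the article
certificates = mwh_saved_per_year     # one certificate per MWh saved per year

low, high = 300_000, 1_000_000        # valuation range from the article
print(f"certificates issued : {certificates:,}")
print(f"implied price range : ${low/certificates:.2f} - ${high/certificates:.2f} each")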


IBM isn't alone in providing a financial incentive for energy efficiency.
Pacific Gas and Electric Co., for instance, is working with major utilities
to expand a program that pays a company between $150 and $300 per server
removed from service. The utility has been encouraging its customers to
adopt virtualization to increase server utilization.

Under IBM's program, a company could keep its energy certificates and use
them simply as proof of corporate responsibility. But other companies might
sell these certificates on one of the emerging carbon markets.

Saving the Planet at the Speed of Light

[Here is an excellent report commissioned by the EU on how ICT technologies
can help reduce carbon dioxide emissions. “ICT’s carbon dioxide reduction
impact is 10 times more than its direct carbon dioxide reduction”. And we
have hardly started to look at making ICT technologies themselves more
energy efficient through use of web services, virtualization, grids, Web
2.0, NGI etc.

Research and education networks and university CIOs should play a critical
leadership role in experimenting and deploying new network and
cyber-infrastructures that minimize the carbon footprint of these
activities. They can also deploy various types of “bits for carbon”
(e-dematerialisation) trading schemes such as providing free music
downloads, videos, electronic textbooks, and campus-wide advanced tele-presence
systems in exchange for carbon fees assessed on student parking,
researcher’s travel, and inefficient high energy consuming computer
systems, etc.

Many of these techniques and practices will also lead to exciting new
business opportunities. Countries that will be the first to deploy ICT
strategies to mitigate global warming will be the new economic powerhouses
of the future global economy. For example, companies like Cisco are to be
applauded for taking the initiative in this area with their $15m Connected
Urban Development strategy to help cities deploy ICT solutions to reduce
CO2 emissions.

Some excerpts from the report – BSA]

www.etno.be/Portals/34/ETNO%20Documents/Sustainability/Climate%20Change%20Road%20Map.pdf


One of the world’s most pressing challenges is climate change: the need to
radically reduce greenhouse gas emissions, while continuing to enable
economic development, both in the European Union and worldwide is a
combination that requires innovative action.

The EU has affirmed that at least a 15-30% cut in greenhouse gas emissions
by 2020 will be needed to keep the temperature increase under 2 °C, and a
deeper reduction by 60-80% may be needed by 2050.

To achieve these reductions it will be necessary to go beyond incremental
improvements in energy efficiency, current life-styles and business
practices. Improved energy efficiency for existing lifestyles, cars and
domestic appliances may be enough to reach the initial Kyoto targets in
2012, but they will not be enough for deeper reductions. To achieve
dramatic reductions of CO2 additional structural changes in
infrastructure, lifestyles and business practice are necessary.

As demonstrated in this document, there is a potential to allow the ICT
sector to provide leadership. This is a sector that is used to rapid
changes and has many of the most innovative people in the business sector,
and a unique service focus: it can become an important part of the
solutions needed to combat climate change.

The strategic use of ICT can contribute significantly to energy
efficiency, sustainable economic growth as well as job creation. ICT can
reduce the need of travel and transportation of goods by bridging distance
problems. It can increase efficiency and innovation by allowing people to
work in more flexible ways. It can also ensure a shift from products to
services and allow for dematerialization of the economy.


bill.st.arnaud at canarie.ca
http://green-broadband.blogspot.com

Future of the Internet & Cyber-Infrastructure – Reducing Global Warming

One of, if not the, greatest threats to mankind and our planet is global
warming. Around the world there is growing recognition that an
international “call to arms” is necessary if we want to minimize economic
dislocation and suffering of unimaginable proportions due to global
warming.

At the same time as we wrestle with the challenges of global warming the
ICT research community is doing some serious soul searching on the Future
of the Internet and the future evolution of Cyber-Infrastructure (SOA,
Web 2.0, Grids etc). To date most of the discussion has been about
technology issues of IPv4 versus IPv6, NGI versus NGN, network neutrality,
Semantic web versus web services, and so on.

But I would argue that the future of the Internet and Cyber-Infrastructure
should be less about such technology debates and more about how we can use
the Future Internet and Cyber-Infrastructure to reduce global warming.

There are various estimates that ICT hardware in terms of computers,
routers and switches consumes upwards of 9% of the energy production in
North America. The first challenge for the ICT research community should
be, at least, to reduce this carbon footprint.

Fortunately there is a promising new concept of virtualization that may
considerably reduce the power consumption of ICT equipment. Researchers
and equipment vendors are now talking about building virtual computers,
networks, routers and switches as a key architectural feature of the
Future Internet and Cyber-Infrastructure. Initiatives such as NSF
CYBER-INFRASTRUCTURE, GENI, 4WARD, FEDERICA, MANTICORE and UCLP are all
based around the concept of representing physical resources such as
computers, networks and routers as independent virtual resources.

Large, centralized and extremely high efficiency ICT equipment using
renewable sources of energy such as wind and solar power may be the future
physical architecture of the Internet and Cyber-infrastructure. But no
one wants to go back to the bad old days of large centralized mainframes
and carrier networks. Virtualization allows multiple independently
managed network and virtual organizations to exist on a common very high
energy efficiency network substrate and computational fabric. So all the
modern advantages of intelligence and control at the edge can be
maintained, and new applications and services such as P2P, Web 2.0, etc. can
be deployed by users without getting permission of the owners of the
underlying substrate.

Next Generation Internet, Global Warming and the Consumer

The second challenge for the ICT research community is how to use ICT
technologies to enable the average consumer to reduce their carbon footprint.

Governments around the world are wrestling with ways to get their citizens
to reduce carbon dioxide emissions. The current preferred approaches are
to impose “carbon” taxes and/or implement various forms of cap and trade
systems. However, another approach to help reduce carbon emissions is to
“reward” those who reduce their carbon footprint rather than imposing
draconian taxes or dubious cap and trade systems. Consumers will
generally change their behaviour and respond more positively to voluntary
reward mechanisms as opposed to mandatory solutions imposed by government
or other authorities.

But what reward mechanisms can we use that will encourage consumers to
reduce their carbon dioxide emissions and yet in themselves not also
create a significant carbon footprint?

As it turns out, “bits” are almost costless in terms of their carbon
footprint. The carbon dioxide emissions of making one digital copy of a
piece of music or video are virtually no different from making one million
copies of the same material.

Perhaps digital information and knowledge, in terms of music, video and the
myriad applications and services delivered over the Internet, should be the
reward mechanism and new currency for reducing carbon emissions.

So how do we effect a process of reducing carbon dioxide emissions in
transportation and heating in exchange for delivery of valuable carbon
free products and services over the Internet? What are the new economic
models, business arrangements, network architectures and services that
will be necessary to effect these transactions, trading “carbon heavy”
energy products for “carbon light” virtual services and products?

One model that has been proposed (Green Broadband) is to provide consumers
with free high speed Internet in exchange for paying a higher premium on
their energy and gas bill – but with the added incentive of encouraging
the customer to reduce their energy consumption with no penalty. And as
we know from Economics 101, the surest way to reduce consumption of a
precious resource is to increase its price. So the additional premium
consumers pay on their energy bill would be an incentive to reduce
consumption, and yet if they do so they are still rewarded with their
free high speed Internet.
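
To see how the incentives might net out for a household, here is a small worked example. Every number in it (consumption, prices, the size of the premium, the value of the broadband service, the assumed demand response) is an assumption chosen purely for illustration:

# Illustrative arithmetic for the Green Broadband model above: a household
# pays a premium per kWh that funds its free Internet connection, and comes
# out ahead if the higher price nudges it to cut consumption.

baseline_kwh = 900        # assumed monthly consumption before the program
base_price = 0.10         # assumed $/kWh
premium = 0.03            # assumed carbon premium, $/kWh, funding the Internet
internet_value = 40.0     # assumed market price of the broadband service
reduction = 0.20          # assumed consumption cut in response to the price

old_bill = baseline_kwh * base_price
new_kwh = baseline_kwh * (1 - reduction)
new_bill = new_kwh * (base_price + premium)

print(f"old bill            : ${old_bill:.2f}  (Internet billed separately)")
print(f"new bill            : ${new_bill:.2f}  (Internet included)")
print(f"net monthly position: ${old_bill + internet_value - new_bill:+.2f}")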

Other models include consumers voluntarily paying a premium at the gas
pump when they fill up their car, in order to receive free cell phone
service or unlimited MP3 downloads for their iPod.

There are an endless number of creative possibilities.

Those interested in this topic are welcome to attend my talks at TRLabs
in Edmonton on Friday, November 9th or at the Canadian Urban Institute on
Friday, November 23.

TRLabs Building the Next Generation Internet
http://www.trlabs.ca/trlabs/about/events/ngiwkshp_11092007.html

Canadian Urban Institute
Green Broadband and the Digital Divide
http://www.canurb.com/

Green Broadband
http://green-broadband.blogspot.com

Grids and virtualization can help reduce carbon dioxide emissions

[There are a number of carbon credit and trading companies being
established to measure and audit energy savings and market them as carbon
credits. These carbon credits can be earned from promoting tele-commuting,
reduced air travel, consolidating servers, etc. This is likely to be a
growing market and offers new commercialization opportunities for academia
and business in developing SOA and mashups on networks for the auditing
and automatic trading of carbon credits. I suspect this will propel
organizations to move to grids and virtual servers from Amazon and the
like. From a posting on Slashdot. Some excerpts --BSA]




http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9045278&intsrc=news_ts_head


Thursday, November 1, 2007

How Web 2.0 and SOA could help save electricity

[This is a good example of the power and flexibility of SOA and Web 2.0. These tools allow integration with various network services as well. Inocybe, for example, allows web services to interconnect power devices with network services -- www.inocybe.ca -- and is ideal for integration with Green Broadband initiatives as mentioned in my previous post. Some excerpts from the NetworkWorld article --BSA]


http://www.networkworld.com/supp/2007/ndc6/102207-pnnl-ibm-soa-case-study.html?netht=102307dailynews2&&nladname=102307dailynews


Researchers at the U.S. Department of Energy's Pacific Northwest National Laboratory (PNNL) in Richland, Wash., decided to find out. With IBM as a partner, they built a demonstration network called GridWise that showed how an event-driven service-oriented architecture (SOA) can be used to build a power marketplace that lets residential and commercial customers change their electricity consumption nearly in real time, based on price and other factors. During the yearlong, Energy Department-sponsored marketplace demonstration, customers spent less money on power, and utilities easily accommodated spikes in demand without affecting service levels.

The marketplace, an SOA application, ran on an IBM WebSphere Application Server at PNNL and received data in real time from various Web services about electricity's current wholesale price and most recent closing price, as well as whether those prices were trending up or down. It communicated with specialized, "smart" appliances at participants' sites via IBM-developed middleware built within what IBM calls its event-driven architecture (EDA) framework and running on the WebSphere server. The EDA middleware provided the link between the transaction-oriented marketplace and the more physical world of the controls-based appliances. "Using event-based programming, we bridged between the control-systems world and the SOA-transaction world," says Ron Ambrosio, manager of Internet-scale control systems at IBM. "It let us build applications that are more control-like."

Via Web services, the virtual thermostats would bid a certain price into the marketplace based on the current temperature in the house, what the user's preferences were, and how responsive they wanted to be to changing prices.

Every five minutes, the marketplace would take those bids and determine a new clearing price for electricity. The new price would then flow out from the SOA marketplace through an event bus to all the virtual devices, kicking off their reaction.
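
A stripped-down sketch of that bidding-and-clearing cycle is below. The bid curve, supply figure and uniform-price clearing rule are invented stand-ins; the actual GridWise market logic is more sophisticated:

# Toy version of the five-minute cycle described above: virtual thermostats
# bid a price based on comfort settings, the market clears against supply,
# and each device reacts to the clearing price.

from dataclasses import dataclass

@dataclass
class VirtualThermostat:
    name: str
    comfort_temp: float       # degrees C the occupant wants
    current_temp: float
    price_sensitivity: float  # assumed $ per degree of discomfort

    def bid(self) -> float:
        """Willingness to pay rises as the house drifts from the setpoint."""
        discomfort = max(0.0, self.current_temp - self.comfort_temp)
        return discomfort * self.price_sensitivity

def clear(bids: list[float], supply_units: int) -> float:
    """Uniform-price clearing: the supply_units-th highest bid sets the price."""
    ranked = sorted(bids, reverse=True)
    k = min(supply_units, len(ranked)) - 1
    return ranked[k] if k >= 0 else 0.0

homes = [
    VirtualThermostat("A", comfort_temp=21, current_temp=24, price_sensitivity=0.05),
    VirtualThermostat("B", comfort_temp=21, current_temp=22, price_sensitivity=0.02),
    VirtualThermostat("C", comfort_temp=21, current_temp=26, price_sensitivity=0.04),
]

price = clear([h.bid() for h in homes], supply_units=2)  # only 2 units available
for h in homes:
    action = "run A/C" if h.bid() >= price else "defer"
    print(f"home {h.name}: bid ${h.bid():.3f}, clearing ${price:.3f} -> {action}")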

In fact, Pratt estimates that adopting an SOA-EDA market-based approach across the United States could result in huge savings in power-grid infrastructure. "We are going to build a half a trillion dollars of new generation, transmission and distribution facilities in the United States in 20 years just to meet the load growth of our population and economy," he says. "And we can save at least 10%, maybe 20%, of that investment with these distributed, Internet-type control approaches."

Cisco's Connected Urban Development to Reduce Carbon Emissions



[I am very excited to hear news of Cisco's Connected Urban Development Initiative and their commitment, in partnership with MIT and the cities of San Francisco, Seoul and Amsterdam, to spend $15 million in support of advanced network solutions to reduce carbon dioxide emissions. This is very much in line with the Green Broadband initiative referenced in my earlier postings. Unfortunately there does not yet exist a web site describing Cisco's initiative, but there is some excellent background material publicly available from either me or Nicola Villa of Cisco. Thanks to Nicola -- BSA]

Green Broadband
http://green-broadband.blogspot.com/

Cisco's Connected Urban Development http://www.clintonglobalinitiative.org/NETCOMMUNITY/Page.aspx?&pid=513&srcid=395

http://newsroom.cisco.com/dlls/2006/ts_092106.html?CMP=ILC-001



Connected Urban Development to Reduce Carbon Emissions, 2006 Objective

- Reducing global warming with smart, environmentally friendly cities while driving social, economic and environmental value

- An urban communications infrastructure makes the flow of information, people, traffic and energy more efficient

- Showcase how broadband collaborative networks and advanced technologies can transform sustainable cities

- Funding support for thought leadership and proof-of-concepts


Cisco Commitment Details

* Estimated Total Value: $15,000,000
* Commitment Duration: 5 years
* Anticipated Launch: January 1, 2007
* Geographic Scope: Seoul, Amsterdam, San Francisco
* Geographic Region: Global
