Telecom and networking trends 2013

One of the big trends of 2013 and beyond is the pervasiveness of technology in everything we do – from how we work to how we live and how we consume.

Worldwide IT spending growth was anemic last year as IT and telecom services spending was seriously curtailed, but things now seem to be improving. Telecom services spending, which has been curtailed in the past few years, grew by only a tenth of a point in 2012, to $1.661tr, but Gartner projects spending on mobile data services to grow enough to more than compensate for declines in fixed and mobile voice revenues. Infonetics Research sees telecom sector growth outpacing GDP growth. Global capital expenditure (capex) by telecommunications service providers is expected to increase at a compound annual rate of 1.5% over the next five years, from $207 billion in 2012 to $223.3 billion in 2017, says a new market report from Insight Research Corp.
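
A quick sanity check of that capex projection (the small gap versus the reported $223.3 billion suggests the 1.5% figure is rounded):

```python
def project_capex(base, rate, years):
    """Project capital expenditure with compound annual growth."""
    return base * (1 + rate) ** years

# $207 billion in 2012, growing 1.5% per year for five years
capex_2017 = project_capex(207.0, 0.015, 5)
print(round(capex_2017, 1))  # ~223.0, close to the reported $223.3 billion
```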

Europe’s Telco Giants In Talks To Create Pan-European Network. Europe’s largest mobile network operators are considering pooling their resources to create pan-European network infrastructure, the FT is reporting. Mobile network operators are frustrated by a “disjointed European market” that’s making it harder for them to compete.


“Internet of Things” gets a new push. The Ten Companies (Including Logitech) Team Up To Create The Internet Of Things Consortium article tells that your Internet-connected devices may be getting more cooperative, thanks to a group of startups and established players who have come together to create a new nonprofit group called the Internet of Things Consortium.

Machine-to-Machine (M2M) communications are used more and more. Machine-to-machine technology made great strides in 2012, and I expect an explosion of applications in 2013. Mobile M2M communication offers developers a basis for countless new applications across all manner of industries. The Extreme conditions M2M communication article tells that M2M devices often need to function in extreme conditions. According to market analysts at Berg Insight, the number of communicating machines is set to rise to around 270 million by 2015. The booming M2M market is driven by the nearly unlimited uses for M2M communications. More and more areas of life and work will rely on M2M.

The car of the future is M2M-ready and has Ethernet. Ethernet has already been widely accepted by the automotive industry as the preferred interface for on-board diagnostics (OBD). Many cars already feature Internet connectivity as well, and many manufacturers are taking additional steps to develop vehicle connectivity. One such example is the European Commission’s emergency eCall system, which is on target for installation in every new car by 2015. Vehicle-to-vehicle communications and Internet connectivity within vehicles also aim to detect traffic jams promptly and prevent them from getting any worse.

M2M branches beyond one-to-one links article tells that M2M is no longer a one-to-one connection but has evolved to become a system of networks transmitting data to a growing number of personal devices. Today, sophisticated and wireless M2M data modules boast many features.

The Industrial Internet of Things article tells that one of the biggest stories in automation and control for 2013 could be the continuing emergence of what some have called the Internet of Things, or what GE is now marketing as the Industrial Internet. The big question is whether companies will see a payback on the needed investment. And there are many security issues that need to be carefully weighed.


Very high speed 60GHz wireless will be talked about a lot in 2013. Standards sultan sanctifies 60GHz wireless LAN tech: the IEEE blesses WiGig’s HDMI-over-the-air and publishes 802.11ad. The Wi-Fi and WiGig Alliances become one, working to promote 60GHz wireless. The WiGig Alliance’s 60GHz “USB/PCI/HDMI/DisplayPort” technology sits on top of the IEEE radio-based communications spec. WiGig’s everything-over-the-air system is expected to deliver up to 7Gbit/s of data, albeit only over a relatively short distance from the wireless access point. The fastest Wi-Fi ever is almost ready for real-world use, as WiGig routers, docking stations, laptops, and tablets were shown at CES. It’s possible the next wireless router you buy will use the 60GHz frequency as well as the lower ones typically used in Wi-Fi, allowing for incredibly fast performance when you’re within the same room as the router and normal performance when you’re in a different room.
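
To put 7Gbit/s in perspective, a back-of-the-envelope comparison against typical 802.11n throughput (both rates and the file size are illustrative; real-world throughput will be lower than the theoretical peak):

```python
# Theoretical WiGig peak rate vs. an optimistic 802.11n link
WIGIG_GBPS = 7.0    # 802.11ad peak, per the WiGig Alliance
WIFI_N_GBPS = 0.3   # generous 802.11n throughput, for comparison

file_gbits = 25 * 8.0  # a 25 GB file (e.g. an HD movie), in gigabits

print(round(file_gbits / WIGIG_GBPS, 1))        # ~28.6 seconds at 60 GHz peak
print(round(file_gbits / WIFI_N_GBPS / 60, 1))  # ~11.1 minutes over 802.11n
```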

Communications on power line still gets some interest at least inside house. HomePlug and G.hn are tussling it out to emerge as the de-facto powerline standard, but HomePlug has enjoyed a lot of success as the incumbent.

Silicon photonics ushers in 100G networks article tells that a handful of companies are edging closer to silicon photonics, hoping to enable a future generation of 100 Gbit/s networks.

Now that 100G optical units are entering volume deployment, faster speeds are clearly on the horizon. The Looking beyond 100G toward 400G standardization article tells that the push is now officially “on” for a 400 Gigabit Ethernet standard. The industry is trying to avoid the mistakes made with 40G optics, which lacked any industry standards.

Market for free-space optical wireless systems expanding. Such systems are often positioned as an alternative to fiber-optic cables, particularly when laying such cables would be cost-prohibitive or where permitting presents an insurmountable obstacle. DARPA Begins Work On 100Gbps Wireless Tech With 120-mile Range.

914 Comments

  1. Tomi Engdahl says:

    The MOST auto connectivity spec is on the way out as a new unshielded single twisted-pair cable arrives for automotive Ethernet. This means automakers can leverage the ubiquitous Ethernet standard while reducing connectivity cost and cabling weight.

    Source: http://www.designnews.com/author.asp?section_id=1386&cid=NL_Newsletters+-+DN+Daily&doc_id=257345&image_number=8

    Reply
  2. Tomi Engdahl says:

    Cars are becoming the fourth platform (after the TV, PC, and smartphone) for CE companies.

    Source: http://www.designnews.com/author.asp?section_id=1386&cid=NL_Newsletters+-+DN+Daily&doc_id=257345&image_number=5

    Reply
  3. Tomi Engdahl says:

    Embedded with sensors (and CSR’s wireless chip), these Nike shoes can track every move on the court and wirelessly sync stats on an Apple iOS device.

    Source: http://www.designnews.com/author.asp?section_id=1386&cid=NL_Newsletters+-+DN+Daily&doc_id=257345&image_number=11

    Reply
  4. Tomi Engdahl says:

    Internet 2012 in numbers
    http://royal.pingdom.com/2013/01/16/internet-2012-in-numbers/

    How many emails were sent during 2012? How many domains are there? What’s the most popular web browser? How many Internet users are there? These are some of the questions we’ll answer for you.

    What about the Internet in 2013?

    Just a couple of weeks into 2013 we don’t yet know much about what the year ahead has in store for us. However, we can perhaps make a few predictions: we will be accessing the Internet more with mobile devices, social media will play an increasingly important role in our lives, and we’ll rely even more on the Internet both privately as well as professionally.

    Reply
  5. Tomi Engdahl says:

    The 787 Dreamliner Scenario: How Data Can Solve Epic Messes
    http://slashdot.org/topic/bi/the-787-dreamliner-scenario-how-data-can-solve-epic-messes/

    Sensors linked to analytics and diagnostic software could help companies like Boeing solve crises more quickly.

    Following reports of battery failures onboard Boeing’s 787 Dreamliner, the Federal Aviation Administration (FAA) has issued an “emergency airworthiness directive” temporarily grounding the airliners.

    Boeing relies on massive amounts of collected data to improve its manufacturing and maintenance efforts, of course. As part of that process, it moves 60 petabytes of data around its network

    It has also experimented with launching cloud platforms, most notably with low-disk data projects such as an e-commerce Website

    And at the moment, Boeing has a big data job on its hands: figure out the root of the battery issue—and how that issue might affect the 787’s other critical systems. Fortunately, modern aircraft come seeded with lots of sensors; combined with a trove of other manufacturing and maintenance data, it’s likely the company can soon find a solution.

    A couple months ago, General Electric sent its executives to several high-profile tech conferences to pitch the idea of an “Industrial Internet,” which combines the latest in analytics tools and data-gathering sensors with old-school manufacturing. By seeding their fleets of machines and vehicles with sensors, GE argued, companies could receive massive amounts of data about every stage of their operations. Run that data through the appropriate analytics packages, and those companies could squeeze out considerable efficiencies.

    If that vision comes to pass—and GE, along with other firms, manages to convince enough industries to sprinkle their products with data-gathering sensors—it could make troubleshooting a somewhat more streamlined process.

    Reply
  6. Tomi Engdahl says:

    Microsoft pushes ahead with its own take on WebRTC
    http://gigaom.com/2013/01/17/microsoft-cu-webrtc-prototype/

    Microsoft published a first prototype for plugin-free video chat in the browser Thursday. However, it’s a bit different from what Google and others have in mind.

    Microsoft’s Open Technologies unit published a prototype implementation of browser-based video chat today that allows a user of a Mac OS-based Chrome browser to chat with a user running IE 10 on Windows.

    Previous efforts around web-based, plugin-free voice and video chat were largely driven by Mozilla and Google, with the latter contributing a lot of its technology to an effort dubbed WebRTC, which is short for web-based real-time communications. Work on WebRTC progressed in 2012, and parts of the technology have already been implemented in Chrome and Opera.

    However, Microsoft argued that some of the core assumptions of that approach were wrong. In particular, the company took issue with efforts to make Google’s VP8 video codec the default choice for WebRTC. Microsoft’s own proposal, dubbed CU-WebRTC, would instead leave it up to the developer of each app to settle on a codec as well as on other specifics, like the data formats used to communicate.

    Microsoft’s initial proposal wasn’t exactly embraced by everyone

    However, the trio believes that the industry is nonetheless making progress towards a common standard

    Reply
  7. Tomi Engdahl says:

    Are Telepresence Robots Becoming the Norm for Companies With Work-at-Home Employees?
    http://www.designnews.com/author.asp?section_id=1386&doc_id=257159&cid=NL_Newsletters+-+DN+Daily

    Not too long ago, employees from various companies and institutions were able to perform their jobs from the comfort of their own homes. Typically this was accomplished with a combination of devices such as a PC, webcam, and phone to complete their tasks. However, it was a rare few who experienced that world.

    Things have changed since then. With advances in mobile communications and robotics, employees (and even students) can now use a unique form of telepresence that lets them work from home while maintaining a physical presence in the workplace. Wireless and robotic tech combined not only give both employers and employees the ability to perform tasks, but also give co-workers the feeling that they’re actually at work with one another. The last five years have truly brought the prospect of using robots into both the workplace and learning institutions.

    This trend looks to gain even more momentum. Although there are about a dozen or so telepresence robotic manufacturers at this time, the actual number of businesses or institutions that employ them is sparse

    Reply
  8. Tomi Engdahl says:

    Lithium-Ion Batteries Emerge as Possible Culprit in Dreamliner Incidents
    http://www.designnews.com/document.asp?doc_id=257519&cid=NL_Newsletters+-+DN+Daily

    A succession of problems has plagued Boeing’s 787 Dreamliner, but investigators are now most concerned about incidents involving overheating of lithium-ion batteries.

    Federal Aviation Administration (FAA) officials grounded Boeing’s high-tech Dreamliner after battery electrolytes reportedly leaked from a lithium-ion battery onboard an All Nippon Airways flight on Wednesday. The liquid reportedly traveled through an electrical room floor to the outside of the aircraft, leaving burn marks around damaged areas.

    The incidents did motivate aviation authorities around the world to order stoppage of Boeing 787 flights, however. The FAA also announced it will work with Boeing engineers to conduct a comprehensive review of the 787′s design and manufacture, with an emphasis on the aircraft’s electrical power and distribution systems.

    Most of the concern around the 787 involves the use of lithium-ion batteries. The 787 is the first commercial aircraft to employ them. Its electrical architecture also operates at higher voltages than its predecessors, experts told us. “In terms of the ancillary systems, almost everything on the 787 is electrical,” Freiwald said. “Most aircraft systems operate at 115V AC, whereas this is a 230V system. It’s a pretty substantial amount of power and current.”

    Aviation experts said the energetic quality of lithium-ion can be a concern onboard aircraft. “One of the issues with lithium batteries is they get very hot,” Freiwald said. “When they ignite, they can burn so hot that Halon 1301 won’t extinguish a fire.”

    Automakers, many of whom use lithium-ion chemistries in hybrids and electric cars, typically operate their batteries with cooling systems.

    Even with cooling, however, lithium-ion automotive batteries have been known to have problems on rare occasions.

    Reply
  9. Tomi Engdahl says:

    Why no one wants to Joyn GSMA’s Skype-killing expedition
    http://www.theregister.co.uk/2013/01/18/joyn_joyn/

    Operators are bleeding revenue to over-the-top players, and pinning their hopes on the GSMA-based Joyn standard, but a year after launch platform developer OpenCloud thinks the GSMA might be the problem rather than the solution.

    The culture of internationally agreed standards and glacial accreditation is fatally slowing the development of operator solutions, putting them at the mercy of internet companies who will inevitably out-innovate them and reduce operators to the status of bit pipes.

    Joyn, also known as Rich Communication Services (RCS), is the network operators’ answer to Skype, Viber and all the other OTT players that are denying them the voice revenue upon which they still depend, but getting (or keeping) customers on board will take more than replication of existing services.

    “They’ve launched services no better than [those of] OTT players,” Windle pointed out, referring to the Joyn product in Spain which now works across Orange, Telefonica and Vodafone. “RCS is bringing virtually nothing, and it has taken them five years to do it.”

    But that’s what operators have always done: worked together to ensure interoperability, as interoperability has always been so essential to their business. But no one expects to be able to Facetime a Skype ID, or Yahoo a Google+ account, so interoperability obviously isn’t as important as it once was.

    So as long as operators all play nicely together in the GSMA, any initiative to combat the OTT players is inherently doomed, which is bad news for the network operators

    Reply
  10. Tomi Engdahl says:

    France Proposes an Internet Tax
    http://www.nytimes.com/2013/01/21/business/global/21iht-datatax21.html?pagewanted=all&_r=0

    France, seeking fresh ways to raise funds and frustrated that American technology companies that dominate its digital economy are largely beyond the reach of French fiscal authorities, has proposed a new levy: an Internet tax on the collection of personal data.

    companies gather vast reams of information about their users, harnessing it to tailor their services to individuals’ interests or to direct customized advertising to them.

    “They have a distinct value, poorly reflected in economic science or official statistics,”

    “We want to work to ensure that Europe is not a tax haven for a certain number of Internet giants,” the digital economy minister, Fleur Pellerin, told reporters in Paris on Friday.

    Reply
  11. Tomi Engdahl says:

    Telecom Economic Forecast, 2013-2014
    http://www.forbes.com/sites/billconerly/2012/11/19/telecom-economic-forecast-2013-2014/

    Telecom spending is leveling off, more so than most other types of consumer expenditures. Consumers cut back on their telecom spending in the recession by slowing their growth of cellular service while accelerating cutbacks to land lines.

    What does the future of telecom hold? The component trends will no doubt continue: reduction of land line spending and increasing penetration of smart phones and tablets. The magnitude of the spread of smart phone and tablet usage will dictate the trend in telecom spending.

    Reply
  12. Tomi Engdahl says:

    Our top ten predictions for the telecoms market in 2013
    http://www.analysysmason.com/About-Us/News/Insight/Top-10-telecoms-predictions-Jan2013/

    In 2013, roll-out of LTE services will have limited economic impact initially, social media giants look set to stir up IP-based messaging services and smartphone penetration growth rates will slow considerably.

    Reply
  13. Tomi Engdahl says:

    India Bars ZTE, Huawei, Others From Sensitive Government Projects
    http://yro.slashdot.org/story/13/01/22/021253/india-bars-zte-huawei-others-from-sensitive-government-projects

    “The Indian Government has decided it won’t be using telecom equipment from international vendors, and has barred all such foreign companies from participating in the US$3.8 billion National Optical Fiber Network (NOFN) project — a project aimed at bringing high-speed Internet connectivity to the rural areas of India.”

    “released a list of certified GPON suppliers”

    Reply
  14. Tomi Engdahl says:

    Netflix Accused Of Bullying ISPs And Discriminating Against Users

    Time Warner Cable has accused Netflix of bullying Internet Service Providers (ISPs) and discriminating against its own users. Netflix’s new Open Connect Content Delivery Network (CDN) is what has Time Warner Cable so upset; TWC believes that Netflix was in the wrong in how it implemented this new CDN program last year.

    Time Warner Cable (TWC) says that requiring companies to participate in the Open Connect program is discriminatory to both companies and Netflix users

    Read more at http://www.inquisitr.com/490128/netflix-accused-of-bullying-isps-and-discriminating-against-users/#mqT3ZUsXiIM8xAo5.99

    Reply
  15. Tomi Engdahl says:

    Ovum finds vendors’ struggles to integrate IT and network data is hindering telco’s 360-degree customer view
    http://ovum.com/press_releases/ovum-finds-vendors-struggles-to-integrate-it-and-network-data-is-hindering-telcos-360-degree-customer-view/

    A holistic view of the telco customer across siloed applications is essential for operators looking to reduce churn, but the majority of BI and analytics solutions on the market today are not up to scratch, says Ovum.

    In its latest evaluation of the BI vendor landscape for telcos, Ovum concludes that realising value from BI and analytics will require considerable investment by both vendors and telcos, particularly in interoperability and data integration.

    Reply
  16. Tomi says:

    Smart networks: coming soon to a home near you
    http://oecdinsights.org/2013/01/21/smart-networks-coming-soon-to-a-home-near-you/

    In 2017 a household with two teenagers will have 25 Internet connected devices. In 2022 this will rise to 50, compared with only 10 today.

    In households in the OECD alone there will be 14 billion connected devices, up from 1.7 billion today and this doesn’t take into account everything outside the household and outside the OECD.

    All this leads to the smart world discussed in a new OECD publication, Building Blocks of Smart Networks.

    Smart networks are the result of three trends coming together (and all being studied by the OECD). Machine to Machine communication means devices connected to the Internet (also known as the Internet of Things). This generates “Big Data” because all those devices will communicate and that data will be processed, stored and analyzed. And to enable the analysis, Cloud Computing will be necessary, because when entire business sectors go from no connectivity to full connectivity within a few years, they will need scalable computing that can accommodate double digit growth. Underlying these trends is the pervasive access to Internet connectivity.
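
    The household-device forecast above implies steep but slowing growth. A quick check of the implied annual rates (assuming “today” means 2012, giving five-year intervals to 2017 and 2022; the OECD report does not state the rates this way):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two data points."""
    return (end / start) ** (1.0 / years) - 1.0

# OECD projection: 10 connected devices per household today
# (assumed 2012), 25 by 2017, 50 by 2022
print(round(cagr(10, 25, 5) * 100, 1))  # ~20.1% per year to 2017
print(round(cagr(25, 50, 5) * 100, 1))  # ~14.9% per year thereafter
```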

    Reply
  17. Tomi says:

    ICT Applications for the Smart Grid
    Opportunities and Policy Implications
    http://www.oecd-ilibrary.org/science-and-technology/ict-applications-for-the-smart-grid_5k9h2q8v9bln-en

    The smart grid is revolutionizing electricity production and consumption. However, strategic use of ICTs and the Internet in energy innovation requires clarifying the roles of partners coming from distinct industries. And it begs for greater coordination of government departments and stakeholder communities that so far had unrelated competencies. This report outlines opportunities, challenges and public policy implications from shifts to ICT-enabled, “smart” electricity grids.

    Reply
  18. Tomi Engdahl says:

    Wi-Fi appliance power control: Easy, but is it good?
    http://www.edn.com/electronics-blogs/power-points/4405262/Wi-Fi-appliance-power-control–Easy–but-is-it-good-

    We’ve come a long way since the crude but workable family of BSR X-10 home network-based appliance controllers

    Now you can get units such as this WeMo Wi-Fi enabled AC outlet from Belkin, and it comes with system-level software and apps so you can be controlling things in no time—at least in theory.

    There’s no doubt this sort of remotely controllable unit can be pretty handy for all sorts of odds and ends around the house.

    But the law of unintended consequences also shows up in these scenarios. First, what happens when you start having a lot of these units in your house—who is going to program and manage them?

    Second, these solutions assume full-up availability and access to the Internet, usually via Wi-Fi. OK, so what happens when that access is lost, due to power issues, interference, equipment (hardware) failure, or system glitches?

    It’s likely that the power-up status of many of the appliances is not defined or consistent, and likely not what you wanted.

    Reply
  19. Tomi Engdahl says:

    The Finnish article Wlan ohjaa yksittäisiä led-loisteputkia (read the English translation) tells that LED light tubes can be controlled over a WLAN connection, even individually. The Finnish company Valtavalo has licensed Netled control technology from Yashima Dengyo Co., Ltd. and sells products based on it.

    Source: http://www.epanorama.net/blog/2011/01/07/communicating-led-lamps/

    Reply
  20. Tomi Engdahl says:

    Google Creating Wireless Network, But For What?
    http://blogs.wsj.com/digits/2013/01/23/google-creating-wireless-network-but-for-what/

    Google is trying to create an experimental wireless network covering its Mountain View, Calif., headquarters, a move that some analysts say could portend the creation of dense and superfast Google wireless networks in other locations that would allow people to connect to the Web using their mobile devices.

    First, the facts: Google last week submitted an application to the Federal Communications Commission, asking for an experimental license to create an “experimental radio service” with a two-mile radius covering its headquarters.

    A Google spokeswoman on Wednesday declined to comment on the purpose of the application, saying the company regularly experiments with new things.

    According to the application, first spotted by wireless engineer Steven Crowley, Google said it would be using wireless frequencies that are controlled by Clearwire Corp., a wireless broadband provider.

    “The only reason to use these frequencies is if you have business designs on some mobile service,” Crowley said.

    Clearwire on Wednesday declined to say whether it was working with Google on the trial.

    Reply
  21. Tomi Engdahl says:

    Report finds 10G transceiver sales outpacing 40/100G
    http://www.cablinginstall.com/articles/2012/12/lightcouing-optical-transceiver-report.html

    As reported at Cablinginstall.com’s sister site, Lightwave, year 2012 hasn’t been so bad for optical transceiver sales, earnings statements be damned, asserts the fiber-optic communications industry analysis firm LightCounting in a new report.

    By the end of 2012, sales of 40/100G optical transceivers will have doubled, claims the firm. However, the analysis finds that 10 Gigabit Ethernet modules have represented the lion’s share of the market in 2012, accounting for more than 50% of sales. LightCounting says that 100 Gigabit Ethernet sales could exceed those of 10 Gigabit Ethernet devices by 2017 – provided transceiver developers succeed in creating and offering modules with smaller form factors and lower power consumption.

    Reply
  22. Tomi Engdahl says:

    40GBase-T promises excitement
    http://www.cablinginstall.com/articles/print/volume-21/issue-1/departments/4-gbase-t-promises-excitement.html?cmpid=EnlContractorJanuary242013

    In December we reported on some of the initial efforts by the Telecommunications Industry Association (TIA) to produce a set of Category 8 twisted-pair cabling specifications for the support of 40GBase-T

    The TIA group working on the Category 8 specifications has ambitious plans concerning the amount of progress it expects to make in 2013.

    The development of Category 8 is in keeping with the successful efforts through which a set of cabling-performance parameters is developed in tandem with an Ethernet transmission protocol.

    TIA’s actions, in the form of its decision to move forth with a 40GBase-T-aligned Category 8, make a statement.

    Reply
  23. Tomi Engdahl says:

    Tech guide: Securing wiring closets with Cisco Catalyst switches
    http://www.cablinginstall.com/articles/2012/12/cisco-catalyst-guide.html

    A recent technical paper from Cisco explains for network technicians how to secure wiring closets using the company’s ubiquitous Catalyst switches.

    Moreover, the paper presents the wiring closet switching infrastructure as the first line of defense for campus networks to protect an organization’s data, applications, and the network itself — and that the features that enable this defense are a critical part of the entire enterprise.

    Reply
  25. Tomi Engdahl says:

    Finally, TI is producing simple, cheap WiFi modules
    http://hackaday.com/2013/01/12/finally-ti-is-producing-simple-cheap-wifi-modules/

    Texas Instruments is releasing a very inexpensive, very simple WiFi module specifically designed for the Internet of Things.

    The TI SimpleLink TI CC3000 WiFi module is a single-chip solution to putting 802.11b/g WiFi in just about every project you can dream up. Just about everything needed to put the Internet in a microcontroller is included in this chip – there’s a TCP/IP stack included on the chip, along with all the security stuff needed to actually connect to a network.

    The inexpensive microcontroller WiFi solutions we’ve seen – including the very cool Electric Imp – had difficult, or at least odd, means of putting WiFi credentials such as the SSID and password onto the device. TI is simplifying this with SmartConfig

    CC3000 only costs $10 in quantities of 1000
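
    The point of putting the TCP/IP stack on the module is that the host microcontroller only issues high-level socket commands. Conceptually, what the host delegates to the CC3000 looks like this plain-Python loopback sketch (the echo server and the temp=21.5 payload are invented stand-ins for a real Internet endpoint, not TI’s API):

```python
import socket
import threading

# A loopback TCP endpoint standing in for a remote server.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def echo_once():
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))  # echo the reading back
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

# What the host MCU asks the module to do over its command interface:
# open a TCP connection, send a reading, read the reply.
with socket.create_connection(("127.0.0.1", port)) as sock:
    sock.sendall(b"temp=21.5")
    reply = sock.recv(1024)

t.join()
srv.close()
print(reply)  # b'temp=21.5'
```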

    Check also:
    http://hackaday.com/2013/01/27/a-breakout-board-for-a-tiny-wifi-chip/

    Reply
  26. Tomi Engdahl says:

    ‘Exponential growth’ in 10GBase-T switch ports forecasted
    http://www.cablinginstall.com/articles/2013/january/crehan-forecast-data-center-switch.html

    In a new report, Crehan Research predicts that the data center switch market will approach $16 billion by 2017. As recorded by Cablinginstall.com’s sister site, Lightwave, the report shows that Ethernet, including Fibre Channel-over-Ethernet, will become an ever-increasing portion of the overall market. Furthermore, within the Ethernet segment, Crehan also anticipates very strong growth for both 10GBase-T switches and 40 Gigabit Ethernet-capable switches (40GbE).

    Crehan believes the strong ramp in 40GbE will be driven by the following factors. First, the upgrade to 10 Gigabit Ethernet (10GbE) switches in the server access layer should drive 40GbE deployments in the uplink, aggregation, and core sectors of data center networks. Second, 40GbE, with its QSFP interface, also can be used as four individual 10GbE links, which not only provides very high 10GbE switch port density but also gives uplink/downlink and oversubscription/wire-speed flexibility.
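
    The uplink flexibility described above is simple arithmetic. A sketch of the trade-off (the 48-port leaf switch with four QSFP uplinks is an illustrative configuration, not one named in the report):

```python
def oversubscription(downlink_ports, downlink_gbps, uplink_qsfp):
    """Ratio of server-facing bandwidth to uplink bandwidth when
    each 40GbE QSFP uplink runs at full rate."""
    downlink = downlink_ports * downlink_gbps
    uplink = uplink_qsfp * 40
    return downlink / uplink

# 48 x 10GbE down, 4 x 40GbE up -> 3:1 oversubscribed
print(oversubscription(48, 10, 4))  # 3.0

# Or split each QSFP into 4 x 10GbE links instead
print(4 * 4)  # 16 additional 10GbE access ports
```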

    The report predicts robust increases for 100 Gigabit Ethernet switches (100GbE), but indicates that while 100GbE will likely be an important long-term data center switch technology, prices and port densities have a way to go before it achieves a meaningful market impact.

    Reply
  27. Tomi Engdahl says:

    Disruptive technology alert: Self-assembling silica microwires could supersede optical fibers
    http://www.cablinginstall.com/articles/2013/january/silica-microwires.html

    Silica microwires are the tiny and as-yet underutilized cousins of optical fibers. If they could be precisely manufactured, these slivers of silica could enable applications and technology not currently possible with the relatively larger optical fiber, says a team of researchers from Australia and France who recently reported their efforts to meet this goal.

    By carefully controlling the shape of water droplets with an ultraviolet laser, the researchers have now found a way to coax silica nanoparticles to self-assemble into much more highly uniform silica wires. The international team describes their novel manufacturing technique and its potential applications in a paper published in the Optical Society’s (OSA) open-access journal Optics Materials Express.

    “We’re currently living in the ‘Glass Age,’ based upon silica, which enables the Internet,” said John Canning, team member and a professor in the school of chemistry at The University of Sydney in Australia. “Silica’s high thermal processing, ruggedness, and unbeatable optical transparency over long distances equate to unprecedented capacity to transmit data and information all over the world.”

    Silica microwires, if they could be manufactured or self-assembled in place, have the potential to operate as tiny optical interconnects. Unlike optical fiber, silica microwires have no cladding, which means greater confinement of light in a smaller structure better suited for device interconnection, further minimizing losses and physical space. “So we were motivated to solve the great silica incompatibility problem,” explained Canning.

    Reply
  28. Tomi Engdahl says:

    Gain network efficiency, reduce cost per bit with 40G ATCA
    http://www.eetimes.com/design/communications-design/4406144/Gain-network-efficiency–reduce-cost-per-bit-with-40G-ATCA

    Mobile operators have a dilemma. They need to expand network capacity to meet ever-increasing customer expectations for bandwidth, but they must also reduce cost per bit and preserve margin. The recent proliferation of smartphones and consequent explosion of mobile video has upped the ante exponentially for mobile network infrastructure.

    To remain competitive and profitable, mobile operators must invest in increased capacity and pay close attention to the ways in which next-generation advancements can help reverse flattening revenues caused by the upsurge of mobile video.

    At the forefront of advancement in mobile infrastructure is Long Term Evolution (LTE), which works in tandem with existing 2G and 3G networks to allow substantially more bandwidth to users (up to 100Mbit/s) at a reduced cost per bit. 3G and LTE networks are supported by a relatively mature and proven platform, Advanced Telecom Computing Architecture (ATCA), which facilitates 3G wireless infrastructure and IP Multimedia Subsystems (IMS). Now with the rollout of LTE, the latest 40G ATCA technology addresses the need for higher bandwidth at a lower cost per bit, building upon a widely adopted platform in the 3G and more advanced LTE packet core systems to dramatically increase performance and system capacity. Companies employing 10G ATCA systems used in 2G and 3G networks can quadruple their data bandwidth per blade by incorporating 40G blades in the chassis and backplane.

    High-performance 40G ATCA systems require more advanced hardware integration of these building blocks to meet performance and thermal requirements.

    40G ATCA platforms provide significant performance gains over their 10G counterparts. When using a 40G platform, the capacity of a 16-slot chassis increases from 154 Gbit/s to 574 Gbit/s.
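    Those chassis figures can be sanity-checked with a quick back-of-envelope sketch. Note that the 14-payload-slot layout is an assumption about a typical 16-slot ATCA chassis with two hub slots, not something stated in the article:

```python
# Back-of-envelope check of the quoted ATCA chassis figures.
# Assumption: 16-slot chassis = 2 hub slots + 14 payload slots.
PAYLOAD_SLOTS = 16 - 2

for label, chassis_gbps in [("10G ATCA", 154), ("40G ATCA", 574)]:
    per_slot = chassis_gbps / PAYLOAD_SLOTS
    print(f"{label}: {chassis_gbps} Gbit/s total, ~{per_slot:.0f} Gbit/s per payload slot")

print(f"capacity ratio: {574 / 154:.1f}x")  # close to the 'quadruple per blade' claim
```

    Under that slot-count assumption, the per-slot numbers line up with the earlier claim that 40G blades quadruple data bandwidth per blade.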

    Considering the market shift to near-insatiable demand for data-consuming Over-the-Top (OTT) services like mobile video, reducing cost per bit is top-of-mind for today’s network operators. 40G ATCA allows operators the capacity and efficiency to both meet this demand and prepare for future technological growth in a cost effective way.

    The T40 40G ATCA platform by Radisys, for example, is designed to decrease cost per bit by more than 50 percent and provide mobile network operators with the increased network performance to accommodate the high access speeds inherent in the launch of LTE.

    Reply
  29. Tomi Engdahl says:

    OIF sets sights on 400G module technology
    http://www.cablinginstall.com/articles/2013/january/oif-400g.html

    The Optical Internetworking Forum (OIF) met in New Orleans last week to evaluate the need for technology-focused projects on 400G modules.

    At the New Orleans meeting, the OIF’s Physical and Link Layer Working Group announced that it has started a new project to define a module interface Implementation Agreement targeting 400G long-haul transmission.

    “There is nothing in the standards market today to use as a starting point to support 400G module solutions,” said Karl Gass of TriQuint Semiconductor, optical vice chair of the Physical and Link Layer Working Group. “Definition of a 400G long-haul module provides the industry with technology parameters for near-term component development and implementation strategies.”

    “OIF standardization efforts helped make the 100G market a great success in the face of broad competition from 40G,” added Andrew Schmitt, industry analyst for Infonetics, during a lunchtime meeting with OIF members in NOLA. “And the organization will continue to be an important catalyst for accelerating component availability of future technologies such as 400G, pluggable coherent, or direct-detect 100G.”

    Reply
  30. Tomi Engdahl says:

    The Super Bowl Gave the Web the Night Off
    http://allthingsd.com/20130204/the-super-bowl-gave-the-web-the-night-off/

    Yes, you could stream the Super Bowl on the Web yesterday. But most of you didn’t. And while a lot of you were tweeting or Facebooking during the game, a lot of you stayed away from the Web during the game, period.

    Reply
  31. Tomi Engdahl says:

    Busting the DPI Myth: Deep Packet Inspection Provides Benefits to End Users and Operators Alike
    http://rtcmagazine.com/articles/view/102914

    Long suspected as a means of invading privacy by shadowy governmental and non-governmental forces, deep packet inspection has become a vital tool in securing networks and in efficiently allocating bandwidth to meet ever-growing demand.

    In recent years, exploding data usage, particularly mobile video traffic, has led to a substantial increase in the demand for bandwidth. Video data usage, including over-the-top (OTT) service, will continue to increase exponentially during the next few years, with video projected to be the leading bandwidth drain by a factor of two or more over its nearest bandwidth rival.

    Combined with the prevalence of smartphones, almost everyone could potentially be considered a “disproportionate” user, leading operators to turn to deep packet inspection (DPI) as a technique for enhancing network efficiency, prioritizing traffic and helping differentiate levels of service.

    DPI, defined broadly as the ability to collect and utilize network information, provides a sophisticated tool for maximizing broadband service providers’ return on investment (ROI), while also ensuring higher-quality service for users.

    Considering that network operators have both an obligation and a vested interest in providing increasingly robust DPI solutions, devices that work alongside low- to high-density platforms and support a wide array of bandwidth optimization and security services are very useful for fulfilling DPI.

    A high-density ATCA DPI platform provides centralized stateful load balancing and scalable, redundant payloads for PCEF or PGW network applications.
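    As a toy illustration of what "collecting and utilizing network information" beyond port numbers means in practice, here is a minimal signature-based classifier. The signatures and the 64-byte inspection window are illustrative assumptions, not any vendor's implementation:

```python
# Toy illustration of the idea behind DPI: instead of looking only at
# port numbers, the classifier inspects the payload itself. The
# signature table and inspection window are simplified assumptions.
SIGNATURES = {
    b"GET ": "http",
    b"POST": "http",
    b"\x16\x03": "tls",                  # TLS handshake record header
    b"BitTorrent protocol": "bittorrent",
}

def classify(payload: bytes) -> str:
    """Return a protocol label for a packet payload, or 'unknown'."""
    for sig, proto in SIGNATURES.items():
        if payload.startswith(sig) or sig in payload[:64]:
            return proto
    return "unknown"

print(classify(b"GET /index.html HTTP/1.1"))   # http
print(classify(b"\x13BitTorrent protocol"))    # bittorrent
```

    A real DPI engine adds stateful flow tracking, reassembly, and policy actions on top of this kind of matching, which is why the article stresses load balancing and scalable payloads.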

    Reply
  32. Tomi Engdahl says:

    Internet ‘Under Assault’ by Censoring UN, Regulator Says
    http://www.bloomberg.com/news/2013-02-05/un-internet-oversight-should-be-fought-by-u-s-lawmakers-say.html

    International proposals to control the Internet will continue after a United Nations conference in Dubai, and the U.S. should be ready to fight such efforts, lawmakers and a regulator said.

    “The Internet is quite simply under assault,” Robert McDowell, a member of the U.S. Federal Communications Commission, said yesterday at a joint hearing by three House subcommittees. McDowell, a Republican, warned of “patient and persistent incrementalists who will never relent until their ends are achieved.”

    The U.S. and other nations refused to sign a revised telecommunications treaty at the UN conference in December, saying new language could allow Internet regulation and censorship by governments.

    Reply
  33. Tomi Engdahl says:

    Cisco: Our mobile data appetite doubled in size in 2012 (and it’s getting bigger)
    http://gigaom.com/2013/02/05/cisco-our-mobile-data-appetites-doubled-in-size-in-2012/

    Summary:

    Globally the average mobile user consumed 201 MB a month in 2012. In North America, we binged on more than triple that amount. By 2017, Cisco says, those numbers will increase by a factor of 10.
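    Taking those figures at face value, a one-line projection (treating "more than triple" as a 3x lower bound, which is my reading, not Cisco's exact number) gives:

```python
# Projecting the quoted Cisco figures forward. The 3x North America
# multiplier is a lower-bound assumption from "more than triple".
global_2012_mb = 201               # 2012 global average, MB/month
na_2012_mb = 3 * global_2012_mb    # North America lower bound
growth = 10                        # Cisco's factor-of-10 projection to 2017

print(f"global 2017: ~{global_2012_mb * growth / 1024:.1f} GB/month")
print(f"North America 2017: >{na_2012_mb * growth / 1024:.1f} GB/month")
```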

    Reply
  34. Tomi Engdahl says:

    More People Taking Breaks From Facebook. Time To Worry?
    http://www.forbes.com/sites/jeffbercovici/2013/02/05/more-people-taking-breaks-from-facebook-time-to-worry/

    More than a quarter of Facebook users surveyed in December said they planned to spend less time using the service in 2013 than they did last year, according to a new report by the Pew Internet & American Life Project.

    When you zero in on the youngest cohort of adults, those between age 18 and 29, that proportion rises from 27% to 38%. Meanwhile, a mere 3% of users told Pew they expected to spend more time with Facebook in the coming year.

    Then there’s the trend of “Facebook vacations.” Some 61% of users surveyed said they had voluntarily gone several weeks or longer without logging into the service.

    Now, some caveats. Human beings in general are pretty lousy at predicting their own behavior.

    Still, Facebook knows well that the behavior of the youngest users is often predictive of where things are headed.

    Reply
  35. Tomi Engdahl says:

    Android System Administration Utilities
    http://www.linuxjournal.com/content/android-system-administration-utilities

    Fast forward 15 years to a time where nearly everyone has a smartphone, fast laptop, or a tablet. Switches, routers, and servers are more GUI-friendly, and wireless is abundant in many buildings. And yet some people still administer their servers the same way: via their desktop, or a bulky laptop (don’t get me started on why laptops just got heavier).

    Now that all of that’s out of the way, let’s get started, shall we? I use the applications discussed in this article quite a bit in my system administration duties, on a really small screen. Odds are that if you have a tablet or a bigger screen, you won’t have any problems.

    Wifi Analyzer

    Ping & DNS

    Fing

    ConnectBot with Hackers Keyboard

    Android-vnc-viewer

    Yaaic

    Reply
  36. Tomi Engdahl says:

    FedEx’s file-transfer capacity versus the Internet
    http://boingboing.net/2013/02/05/fedexs-file-transfer-capacit.html

    Cisco estimates that total internet traffic currently averages 167 terabits per second.

    FedEx has a fleet of 654 aircraft with a lift capacity of 26.5 million pounds daily. A solid-state laptop drive weighs about 78 grams and can hold up to a terabyte.

    That means FedEx is capable of transferring 150 exabytes of data per day, or 14 petabits per second—almost a hundred times the current throughput of the internet.
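    The quoted numbers check out; a sketch of the same arithmetic:

```python
# Reproducing the article's sneakernet arithmetic; fleet lift and drive
# figures are taken from the quoted text.
LB_TO_G = 453.592
daily_lift_g = 26.5e6 * LB_TO_G     # 26.5 million pounds of lift per day
drive_weight_g = 78                 # one 1 TB solid-state laptop drive

drives_per_day = daily_lift_g / drive_weight_g
exabytes_per_day = drives_per_day / 1e6               # 1 TB each, 1e6 TB/EB
petabits_per_second = drives_per_day * 8e12 / 86400 / 1e15

print(f"~{exabytes_per_day:.0f} EB/day, ~{petabits_per_second:.0f} Pbit/s")
```

    Roughly 154 EB/day and 14 Pbit/s, versus Cisco's 167 Tbit/s estimate for the whole internet, which is where the "almost a hundred times" comes from.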

    We can improve the data density even further by using MicroSD cards

    So the interesting thing here is the implicit critique of cloud computers. Leave aside the fact that a cloud computer is like a home computer, except that you’re only allowed to use it if the phone company says so.

    Instead, consider for a moment whether streaming — especially wireless streaming — of media that you’re likely to play more than once makes economic or technological sense.

    Increasing the hard drive in your laptop does nothing to the storage capacity of my laptop, but increasing your demands on the wireless spectrum comes at the expense of my use of that same spectrum.

    On the other hand, it’s easy to see why telcos would love the idea that every play of “your” media involves another billable event. Media companies, too — it’s that prized, elusive urinary-tract-infection business model at work, where media flows in painful, expensive drips instead of healthy, powerful gushes.

    Reply
  37. Tomi Engdahl says:

    Socket to ‘em: It’s the HomeGrid vs HomePlug powerline prizefight
    http://www.theregister.co.uk/2013/02/05/feature_powerline_networking_the_next_generation/

    Rival mains LAN standards go mano-a-mano for a place in your home network

    The backers of rival next-generation in-home mains power networking standards may not have come to physical blows in defence of their favoured technologies, but each is no less dismissive of the other for that. Both camps – one in favour of the ITU-T standard, G.hn; the other siding with the HomePlug AV2 specification – claim they are in the ascendant and the natural choice not merely for the now established network bridge market but also for a new role as embedded networking in ‘smart’ devices.

    One of the two, G.hn, was finally fully approved by the International Telecommunications Union in June 2010; part of the standard, the PHY layer, had been ratified the year before. Thirty months on, G.hn has yet to arrive in products that people can actually buy and install.

    According to Forum President Matt Theall – who is also a Technology Strategist at Intel; the chip maker is very keen on HomeGrid technology – getting G.hn to market has been slowed by the accretion of new features since the standard’s ratification.

    Current powerline technology uses single transmitters connected to the Live wires, with the Neutral line completing the circuit. G.hn MIMO adds a second transmitter/receiver pair, this one on the Earth wire, with the Neutral line common to both: two logical circuits on three wires.

    The upshot is greater signal range and performance: a non-MIMO peak PHY rate might jump from 959.12Mb/s to 1918Mb/s with MIMO turned on, HomeGrid supporters claim.

    But G.hn’s rival, HomePlug AV2, can take advantage of MIMO too. Like its opponent, AV2 also delivers better speeds than previous generations of powerline technology by extending the frequency range over which the data-bearing signals are modulated. In HomePlug AV2’s case, that takes the theoretical maximum bit-rate from HomePlug AV’s 200Mb/s – running in the 1.8-28MHz band – to 500Mb/s using extra frequencies made possible by an optional extension to the technology. The option, part of the IEEE 1901 standard, extends the operational bandwidth to 50MHz. HomePlug AV2 extends the frequency range further, right up to the FM band at 86.13MHz, to lift the theoretical PHY rate to 1Gb/s.
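    Putting the quoted rate steps side by side with the spectrum steps (a naive sketch that ignores notched bands, MIMO, and modulation changes; only the band edges and rates come from the text) shows the rates grow faster than the raw bandwidth, so the gains also come from denser modulation:

```python
# Naive check of the quoted HomePlug figures: does PHY rate simply
# scale with spectrum width? (Ignores notches, modulation, MIMO.)
bands = {
    "HomePlug AV (1.8-28 MHz)":     (1.8, 28.0, 200),
    "IEEE 1901 option (to 50 MHz)": (1.8, 50.0, 500),
    "HomePlug AV2 (to 86.13 MHz)":  (1.8, 86.13, 1000),
}
base_width = 28.0 - 1.8
for name, (lo, hi, rate_mbps) in bands.items():
    print(f"{name}: {(hi - lo) / base_width:.1f}x spectrum, "
          f"{rate_mbps / 200:.1f}x quoted PHY rate")
```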

    In addition to Sigma Designs, there is G.hn silicon coming from Marvell, a big HomeGrid supporter.

    There are a fair few service providers in the HomePlug camp too. Many of them have long been putting HomePlug kit into punters’ hands through their home networking initiatives. For these folk, HomePlug AV2 represents a natural upgrade path from AV, or even older versions of the standard: the 14Mb/s HomePlug 1.0 and the 85Mb/s HomePlug Turbo. Let’s not forget, he says, HomePlug AV, like G.hn, is a formal standard, this one set by the IEEE and called 1901.

    The service providers aren’t going to want to have to get their engineers up to speed on G.hn, he says, when they already know HomePlug for powerline, HomePNA for networking over phone cabling and the MoCA standard for the co-ax links that are popular in US homes.

    But there is an advantage for HomeGrid here: G.hn can operate over all three of these media, all connectable through a single PHY. Why worry about MoCA and other standards?

    But not so the world’s radio hams, who disapprove of powerline technology period, claiming it is a source of electromagnetic pollution in the bands they and others favour, and fear the impact of the next-generation technologies even on FM transmissions. To avoid the FM band, HomeGrid cuts out at 80MHz rather than 86MHz, where HomePlug AV2 draws the line.

    In Europe, the EN50561-1:2012 standard put in place by European electronics regulator CENELEC (Comité Européen de Normalisation Electrotechnique) is intended to deal with this kind of powerline interference issue

    It’s certainly a compromise to allow powerline kit to get away with not meeting the EN55022 baseline EMC standard.

    For its part, the HomePlug Alliance says: “We think EN50561-1 represents a decent compromise amongst the various stakeholders.”

    Reply
  38. Tomi Engdahl says:

    TIA taps Bill Conley for oneM2M
    http://www.eetimes.com/electronics-blogs/other/4406065/TIA-taps-Bill-Conley-for-oneM2M?Ecosystem=communications-design

    Bill Conley of B&B Electronics was tapped by the TIA to participate as a TIA delegate on its newly formed oneM2M. The organization is a combo of seven leading information and communications technology Standards Development Organizations that joined to ensure global functionality of M2M communications systems—oneM2M will develop the technical specs for worldwide M2M communications.

    http://www.onem2m.org/

    Reply
  39. Tomi Engdahl says:

    Cisco builds on connected grid portfolio
    http://www.electronicproducts.com/News/Cisco_builds_on_connected_grid_portfolio.aspx

    New solutions help utilities modernize the grid, manage integration complexity and improve workforce coordination

    Cisco today announced expanded solutions to its Connected Grid portfolio based on the Cisco GridBlocks architecture to help utilities modernize their electrical grid. The new portfolio includes the Cisco Utility Operational Network solution, Cisco Connected Grid Design Suite, and Cisco Incident Response and Workforce Enablement solution.

    These three new solutions help utilities modernize, manage, and improve everyday grid operations, by providing operational network solutions for mission-critical grid monitoring and control systems; unified electrical and communications network modeling and design resources for rapid and reliable grid implementations; and integrated communication solutions to improve coordination between field personnel and enable faster service restoration. The Connected Grid Strategy ties into Cisco’s broader idea of the Internet of Everything, connecting people, processes, data, and things.

    Cisco Utility Operational Network Solution includes a package of architecture, design services and products for utility wide area networks that provides utility operators a highly secure and flexible multi-point network to support grid modernization efforts.

    Cisco Connected Grid Design Suite includes a set of hardware appliances, software applications and services to manage the complexity of new technology integration in substations by providing utility engineers with a unified view of electrical and communications networks

    Reply
  40. Tomi Engdahl says:

    More bad news about broadband caps: Many meters are inaccurate
    http://gigaom.com/2013/02/07/more-bad-news-about-broadband-caps-many-meters-are-inaccurate/

    Summary: An executive at a firm ISPs hire to audit their broadband meters says most of his clients so far haven’t built accurate meters.

    For the 64 percent of Americans whose internet service provider imposes a broadband cap, and for those lucky enough to have a meter, I have some bad news. The president of the firm that audits many of the country’s broadband meters says that he can’t certify the measurements produced by five out of seven of his clients’ meters because they don’t count your bits correctly.

    Those five clients — which Sevcik would not name — have meters that Sevcik views as inaccurate, although not all of them have publicly rolled out their meters. And not all of those clients impose a broadband cap. Sevcik usually expects meter accuracy of plus or minus one percent, but so far these don’t measure up.

    “They are wrong by missing numbers by one way or another — sometimes it’s over reporting, but more frequently the error is under reporting,”

    Also disturbing is the attitude that Sevcik has encountered at some clients with malfunctioning meters. “There’s a general sense by some people, ‘Eh, we under report so we give them a free pass, so why worry about that?’” Sevcik says. “I think one does need to worry because it ruins the overall veracity of the meter. It derails trust in the meter.”

    Last November, AT&T customer Ken Stox drew attention to AT&T’s meters when he couldn’t replicate the ISP’s byte count with his own home testing.

    Building a broadband meter is tough

    As for problems that lead to inaccurate meters, there are several. The first is that many of these meters are bolt-on afterthoughts. A telco or a cable company often uses measurement gear that sits on the subscriber side of the network. The ISP has to allocate enough resources at that point to track the bits properly, but networks become congested. Then the ISP faces a choice. Does it count all the bits and risk slowing down the network, or does it let the bit count slide and let the rush of packets through?

    Most ISPs err on the side of letting the packets rush through, favoring a better user experience. But to solve the problem they could dedicate more resources to the counters so they can keep up with peak traffic.

    As Sevcik describes it, many of these counters drop the bits into an Internet Protocol Detail Record (IPDR) format. Those reports are generated every 15 minutes.

    Spread that across 10 million subscribers with a goal of doing hourly updates, and suddenly you have 40 million records to process in that hour. That takes servers — in some cases more than the ISP anticipated.
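    The record-volume arithmetic above is straightforward to reproduce:

```python
# Reproducing the article's record-volume arithmetic: one IPDR report
# per subscriber every 15 minutes.
subscribers = 10_000_000
reports_per_hour = 60 // 15        # 4 reports per subscriber per hour

records_per_hour = subscribers * reports_per_hour
print(f"{records_per_hour:,} records/hour")
print(f"~{records_per_hour / 3600:,.0f} records/second sustained")
```

    Forty million records an hour is a sustained rate of roughly 11,000 records per second, which gives a sense of why the server demand surprised some ISPs.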

    If building a meter is so much work and consumes so many resources, why have them?

    Those critics generally call caps a way for ISPs to protect their pay TV businesses.

    “I’ve been in the internet business for quite some time … and in that time I’ve had my hand in the design of more than 100 networks and seen a lot in network technology. And what I’ve realized is, as the industry has matured there is an awful lot of talk and decisions made by people — consumers, policy advocates in DC and big companies — that is often based on hype,”

    Reply
  41. Tomi Engdahl says:

    Internet Protocol Detail Record
    http://en.wikipedia.org/wiki/Internet_Protocol_Detail_Record

    In telecommunications, an IP Detail Record (IPDR) provides information about Internet Protocol (IP)-based service usage and other activities that can be used by Operational Support Systems (OSS) and Business Support Systems (BSS). The content of the IPDR is determined by the service provider, Network/Service Element vendor, or any other community of users with authority for specifying the particulars of IP-based services in a given context.

    Reply
  42. Tomi Engdahl says:

    Dell: May the Force 10 (Gigabit Ethernet) be with your CAT6 cables
    Embeds OpenFlow control freakage in two 10GE switches
    http://www.theregister.co.uk/2013/02/07/dell_s4820t_switch_openflow/

    By acquiring Force 10 Networks back in July 2011, privatizing IT giant Dell moved itself from a maker of low-end switches to a contender for a slice of the top-end market for 10 and 40 Gigabit Ethernet switches that are ever so slowly becoming the backbone of data centers.

    But you have to keep up with the times, and that means providing a 10GBaseT port alternative to SFP+ ports and cables that are commonly required on 10GE gear, and it also means adding OpenFlow management to the switches.

    In other Dell networking news, the company has added support for the OpenFlow v1.0 protocol to its FTOS 9.1 and higher releases of its switch operating system in the Force10 S4810 and Z9000.

    The combination of these two switches is what Force 10 was pitching as a leaf/spine network that could scale to 24,000 servers running at 10GE speeds or 160,000 servers at Gigabit Ethernet speeds. If there ever was a network that might need some OpenFlow traffic shaping, it is something like that.

    No word on when other Dell Force 10 switches will get OpenFlow support.

    Reply
  43. Tomi Engdahl says:

    Voice over LTE rollouts require new ways of thinking about test
    http://www.edn.com/design/test-and-measurement/4406440/Voice-over-LTE-rollouts-require-new-ways-of-thinking-about-test

    LTE is now a market reality and is becoming firmly entrenched as a wireless network standard
    in markets all over the world. A recent report from the GSA (Global mobile Suppliers Association) stated that the number of commercial LTE networks now stands at 105, deployed across 48 countries. It is anticipated there will be 195 deployments by the end of 2013.

    The progress of network development has been rapid. While some operators are still in the trial phase, the first movers to the technology have already deployed commercial ‘voice over LTE’ (VoLTE) services to their subscribers.

    Of course, VoLTE is where the promise of LTE is realized, providing a service that replicates all the features of traditional voice and offers the end-user a superior alternative to OTT VoIP applications, which are now readily available over wireless networks. With VoLTE, operators are able to deliver end-to-end QoS through the evolved packet core (EPC) as well as over the radio network – which is also improved through MIMO (multiple-input, multiple-output) antennas.

    This is no small task and provides engineering as well as test and measurement challenges to mobile operators moving rapidly towards VoLTE. The vendor ecosystem has responded with the provision of big data ‘geoanalytics’ platforms that provide an operator with an end-to-end view of user quality of experience across the core and radio access networks.

    Reply
  44. Tomi Engdahl says:

    Ethernet at 40: Its daddy reveals its turbulent youth
    Bob Metcalfe: How Token Ring and ‘IBM’s arrogance’ nearly sank Big Blue
    http://www.theregister.co.uk/2013/02/09/metcalfe_on_ethernet/

    Reply
  45. Tomi says:

    Not so fast: Budget cut wipes out €7bn European broadband fund
    http://gigaom.com/2013/02/08/not-so-fast-budget-cut-wipes-out-e7bn-european-broadband-fund/

    In a massive blow to Europe’s plans of getting everyone – even in rural areas – on at least 30 Mbps by 2020, a $9.36 billion fund for stimulating broadband deployment has been axed.

    The European Union has just agreed on its budget for the years 2014-2020, and it’s the first time that budget has actually been cut. Unfortunately for European broadband projects that were counting on money from a €7 billion ($9.36 billion) central fund

    The cut could potentially hit rural areas and small towns the hardest.

    Reply
  46. Tomi Engdahl says:

    Ethernet at 40: Its daddy reveals its turbulent youth
    http://www.theregister.co.uk/2013/02/09/metcalfe_on_ethernet/

    Ethernet, however, was not the original name of what has now become the world’s networking standard.

    “Prior to that we’d been calling it the ‘Alto ALOHA Network,’ Alto being the PC that we were building, and the ALOHA Network being a packet radio network at the University of Hawaii whose design we admired,” Metcalfe said.

    Metcalfe remains more than a little sensitive about just how much Ethernet owes to ALOHA. “Over the years people have said that Ethernet is just ALOHA network,” he told us. “I wrote a preface to my own book in which I put a paragraph explaining how Ethernet was just like ALOHA network except – and then there was like 20 ‘excepts,’ one of which was that the ALOHA network was not a LAN; it was a WAN. It was to connect the Hawaiian Islands over a distance of 200 miles, and it ran at 4800 bits per second.” Ethernet first ran at 2.94 megabits per second.

    There were other differences, as well. Ethernet, for example, operated within a mile, so its propagation time was five microseconds; ALOHA’s 200-mile range, of course, operated at much longer time scales. “The parametric values all changed, and therefore the operation of the network changed,”
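    A hypothetical back-of-envelope (assuming signals propagate near the free-space speed of light, which slightly overstates speed in coax) shows why the two networks' timescales differ by orders of magnitude:

```python
# One-way propagation delay at the two networks' scales. Assumes
# free-space speed of light; real coax is somewhat slower.
C = 3.0e8        # m/s
MILE = 1609.34   # metres

for name, miles in [("Ethernet segment (~1 mile)", 1),
                    ("ALOHA network (~200 miles)", 200)]:
    delay_us = miles * MILE / C * 1e6
    print(f"{name}: ~{delay_us:.0f} microseconds one-way")
```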

    But eventually Metcalfe did get the laser printer hooked up to his new LAN, and thanks to the printer’s resident fonts and compression schemes, Ethernet’s throughput was plenty fast enough to accept print jobs.

    That was the genesis of Ethernet: connecting those “revolutionary” desktop PCs, providing access to the ARPANET, and allowing researchers to simply hit P to print. “We were not planning to do the web, or audio, or video,” Metcalfe told us. Those came later. Much later.

    The Reg asked Metcalfe what it was that boosted Ethernet past its early competitors such as Token Ring and ARCnet, and he told us that despite what some said at the time, “They all worked. Various proponents claimed that the other thing didn’t work, ‘And therefore you should use mine.’ That wasn’t at all true. All these things worked.”

    Some of Ethernet’s early detractors, he said, should hang their heads in shame. “In the dark days,” he reminisced, “IBM was paying professors to write papers showing that Ethernet didn’t really work – when it did work – and that Token Ring was better. Those professors should be ashamed of themselves.”

    from his point of view, Ethernet was the nail in the coffin of Big Blue’s dominance

    More to the point, however, was that Ethernet, as he put it, “understood its place,” while the competition did not.

    “Ethernet was developed in the context of the internet with its seven levels of the ISO reference model,” he said. “So we had a job to do at level one and two, and we didn’t burden Ethernet with all the other stuff that we knew would be above us. We didn’t do security, we didn’t do encryption, we didn’t even do acknowledgements.”

    It’s not that Ethernet was incapable of handling acknowledgements, Metcalfe said, it’s just that he and his cohorts wanted to keep things simple. “Ethernet carried packets. Now, they could be acknowledgement packets, or not, whereas the other methods built some sort of elaborate acknowledgement scheme to boost reliability and so on.”

    As a result, “By virtue of knowing our place, we built something that turned out to be faster and cheaper.”

    And more scalable, he believes.

    From Metcalfe’s point of view, the Token Ring v Ethernet war began in the IEEE in 1980 and ’81. One big break was when, in December of 1982, 19 different companies agreed to use a specific Ethernet spec, despite the fact that it hadn’t been approved by the IEEE and wouldn’t be for a few more years. “As soon as we shipped product – we shipped for the IBM PC using that standard – that was a big moment,” he said. “But that wasn’t the end of the war.”

    In those days Ethernet still required coax, albeit thin coax. “The twisted-pair came in the middle of the 80s – I’m not sure what the particular day was,” Metcalfe said. “That was one of the advantages that Token Ring had, that it was twisted-pair, so coax was a negative. But as soon as we had twisted-pair,” he laughed, “we were running faster at half the price.”

    The first Ethernet LAN created by Metcalfe and his colleagues at PARC ran at 2.94 megabits per second.

    In 1978, Xerox developed X-Wire, an Ethernet LAN that ran at 10Mbps, and Metcalfe’s 3Com began shipping 10Mbps parts in 1981.

    In June 2010 the IEEE approved its standard for 40/100 Gigabit Ethernet.

    Reply
  47. Tomi Engdahl says:

    Exercises to keep your data centre on its toes
    Flatten the structure to stay nimble
    http://www.theregister.co.uk/2012/05/08/ethernet_standards_developments/

    Given the size of networks today, networking should be open to promote interoperability, affordability and competition among suppliers to provide the best products.

    Let’s drill down a little to explore new developments in the ubiquitous Ethernet standard and see how open networking can help you do jobs more efficiently.

    Currently, Ethernet networks have a very controlled and directed infrastructure, with edge devices talking to each other via core switches, and with pathways through the network controlled by a technology called spanning tree.

    This design prevents network loops by ensuring that there is only one path across the network between devices at the edge of the network.
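    As an illustrative sketch (not the actual STP protocol, which elects a root bridge via BPDU exchanges), a breadth-first spanning tree over a small switch topology shows how redundant links end up blocked so that exactly one path remains between any two edges. The topology and root choice below are assumptions for the example:

```python
# Illustrative spanning-tree sketch over a toy switch topology.
# Real STP elects the root bridge and runs distributedly; here we just
# build a BFS tree from an assumed root and mark the leftover links.
from collections import deque

links = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("B", "D")}
adj = {}
for u, v in links:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

root = "A"                          # assumed root bridge
tree, seen, q = set(), {root}, deque([root])
while q:
    u = q.popleft()
    for v in sorted(adj[u]):
        if v not in seen:
            seen.add(v)
            tree.add(frozenset((u, v)))
            q.append(v)

blocked = [l for l in links if frozenset(l) not in tree]
print("forwarding:", sorted(tuple(sorted(l)) for l in tree))
print("blocked:", sorted(tuple(sorted(l)) for l in blocked))
```

    The blocked links are exactly the redundancy that spanning tree sacrifices to avoid loops, which is the inefficiency the rest of the article argues against.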

    Air travel infrastructure has been painstakingly built up to enhance safety and stop planes colliding, as well as to take advantage of economies of scale at hub airports.

    The hub-and-spoke design helps airline and airport operators but not passengers. They could get to their destination much faster by not flying through Heathrow and Chicago.

    So too with Ethernet and packets at the Layer 2 level. Data would arrive at its destination more quickly if it could cross the network without having to go up the network tree (“northward”) to the main or core switches and into Layer 3, get processed and then return down the tree (“southward”) to the destination edge device.

    This Layer 3 supervision is an obstacle to packets travelling more directly, east-west as it were, and only in Layer 2 between the edge devices.

    Ethernet is being transformed to provide edge-to-edge device communications within Layer 2 and without direct core switch supervision.

    What is needed is for the network to be virtualised, to have its data traffic and its control or management traffic separated, and to give networking staff the ability to reconfigure the network dynamically, setting up different bandwidth allocations, routing decisions, and so forth.

    With servers, admin staff can spin up virtual machines and tear them down on demand, with no need to install and decommission physical machines.

    Open secret

    There have to be standards to do this, otherwise it won’t be open.

    One approach to overcoming this challenge is the OpenFlow protocol. The idea is that networks should be software-defined and programmable to improve traffic flows and facilitate the introduction of new networking features.
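    The match-action idea behind OpenFlow can be sketched as follows. This is a toy model, not the OpenFlow wire protocol: the class and method names (`FlowTable`, `install`, `forward`) are illustrative. The point is the separation the article describes: the control plane programs rules into a switch's flow table, and the data plane forwards packets by table lookup instead of sending them up to a core switch for a Layer 3 decision.

    ```python
    class FlowTable:
        """Toy software-defined switch: the controller installs
        match-action rules; the data plane just looks them up."""

        def __init__(self):
            # List of (match-fields dict, output port); first match wins.
            self.rules = []

        def install(self, match, out_port):
            """Control plane: push a forwarding rule into the switch."""
            self.rules.append((match, out_port))

        def forward(self, packet):
            """Data plane: first rule whose fields all match the packet."""
            for match, out_port in self.rules:
                if all(packet.get(k) == v for k, v in match.items()):
                    return out_port
            # No matching rule: a real OpenFlow switch would punt the
            # packet to the controller for a decision.
            return None

    # The controller programs direct east-west paths between edge
    # devices, bypassing the core entirely.
    table = FlowTable()
    table.install({"dst": "edge-B"}, out_port=2)
    table.install({"dst": "edge-C"}, out_port=3)

    port = table.forward({"src": "edge-A", "dst": "edge-B"})  # → 2
    ```

    Because the rules are just data, networking staff can reconfigure paths and bandwidth policy on the fly, which is the "programmable network" promise the paragraph above refers to.
    
    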

  48. Tomi Engdahl says:

    Introducing the 2net™ Platform
    http://www.qualcommlife.com/wireless-health

    The 2net Platform is a cloud-based system designed to be universally interoperable with different medical devices and applications, enabling end-to-end wireless connectivity while allowing medical device users and their physicians or caregivers to easily access biometric data. With two-way connection capabilities and a broad spectrum of connection options, the 2net Platform will change the way you do business.

    The 2net Platform supports SSL secure communication of data and is FDA listed as a Class I Medical Device Data System (MDDS).

    We designed the cloud-based 2net Platform to provide a high degree of security and reliability to seamlessly store and transfer data once it has been acquired from the medical device, so it can be shared with the appropriate audiences, which could include designated health care service companies, providers, payors, pharmaceutical companies, and application and device technology partners. The biometric data is stored in the 2net Platform’s Payment Card Industry (PCI)-compliant data center. The data is encrypted in motion and at rest, and transmitted to the manufacturer’s interface of choice for the end user.

    The 2net Hub, one of the four gateways used to access the 2net Platform’s data center, delivers a new dimension of short-range radio flexibility, security and seamless data transfer, while serving as the information highway for machine-to-machine (M2M) connectivity for medical devices into and out of the home. The 2net Hub is an FDA-listed, compact “plug-and-play” gateway that supports Bluetooth, Bluetooth Low Energy, WiFi, and ANT+ local area radio protocols.

    The 2net Hub is also Continua-certified, and supports 3G and 2G cellular communications.
