Telecom trends for 2015

In a few years there will be close to 4 billion smartphones on Earth. Ericsson’s annual mobility report forecasts increasing mobile subscriptions and connections through 2020 (9.5 billion mobile subscriptions by 2020 and an eight-fold increase in traffic), and expects that by 2020, 90% of the world’s population over six years old will have a phone. It really describes a connected world where everyone will have a connection one way or another.

What about the systems in use today? The majority of the world still operates on GSM and HSPA (3G). Some countries are starting to have good 4G (LTE) coverage, but on average only about 20% of the population is covered by LTE. 4G/LTE small cells will grow at twice the rate of 3G small cells and surpass both 2G and 3G in 2016.

Ericsson expects that 85% of mobile subscriptions in Asia Pacific, the Middle East, and Africa will be 3G or 4G by 2020, and that 75-80% of subscriptions in North America and Western Europe will be LTE by 2020. China is by far the biggest smartphone market in the world by current users, and it is rapidly moving to high-speed 4G technology.

Sales of mobile broadband routers and mobile broadband “USB sticks” are expected to continue to drop. In 2013, 87 million of those devices were sold; in 2014 sales dropped a further 24 per cent. China’s Huawei is the market leader (45% share), so it has the most to lose.
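Assuming the 24 per cent drop is relative to the 87 million units sold in 2013 (the report may define the drop differently), the implied 2014 figure is easy to check:

```python
# Estimated 2014 unit sales for mobile broadband routers/USB sticks,
# assuming the 24% drop applies to the 87M units sold in 2013.
units_2013 = 87_000_000
drop = 0.24
units_2014 = units_2013 * (1 - drop)
print(f"{units_2014 / 1e6:.1f} million units")  # ~66.1 million
```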

The small cell backhaul market is expected to grow. ABI Research believes 2015 will finally witness meaningful small cell deployments. Millimeter wave technology, thanks to its large bandwidth and NLOS capability, is the fastest-growing technology. 4G/LTE small cell solutions will again drive most of the microwave, millimeter wave, and sub-6 GHz backhaul growth in metropolitan, urban, and suburban areas. Sub-6 GHz technology will capture the largest share of small cell backhaul “last mile” links.

Technology for full-duplex operation on a single radio frequency has been designed. The key element is a new practical circuit, known as a circulator, that lets a radio send and receive data simultaneously over the same frequency and could supercharge wireless data transfer. The new circuit design avoids magnets and uses only conventional circuit components. A radio-wave circulator used in wireless communications can double the available bandwidth by enabling full-duplex operation, i.e., devices can send and receive signals in the same frequency band simultaneously. Let’s wait and see whether this technology turns out to be practical.
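A toy baseband simulation (purely illustrative; it models ideal echo cancellation, not the circulator circuit itself) shows why full duplex hinges on removing the transmitter’s own echo from the receive path:

```python
import random

random.seed(1)

# Toy model: the receiver hears a weak remote signal buried under a
# strong echo of its own transmitter. Full duplex only works if the
# known echo can be subtracted out (self-interference cancellation).
n = 1000
remote = [random.choice((-1.0, 1.0)) * 0.001 for _ in range(n)]  # weak desired signal
tx = [random.choice((-1.0, 1.0)) for _ in range(n)]              # our own transmission
echo_gain = 0.5                                                  # TX leakage into our RX

rx = [r + echo_gain * t for r, t in zip(remote, tx)]

# Subtract the estimated echo. (Here the echo gain is known exactly;
# real systems must estimate and track it continuously.)
cancelled = [v - echo_gain * t for v, t in zip(rx, tx)]

# After cancellation, the weak remote bits are recoverable.
decoded = [1.0 if v > 0 else -1.0 for v in cancelled]
errors = sum(d != (1.0 if r > 0 else -1.0) for d, r in zip(decoded, remote))
print("bit errors after cancellation:", errors)  # 0
```

Without the subtraction step, every decoded bit would simply follow the much stronger local transmission.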

Broadband connections are finally more popular than the traditional wired telephone: in the EU, fixed broadband subscriptions will outnumber traditional circuit-switched fixed lines for the first time by the end of 2014.

After six years in the dark, Europe’s telecoms providers see a light at the end of the tunnel. According to a new report commissioned by industry body ETNO, the sector should return to growth in 2016. The projected growth for 2016, however, is small – just 1 per cent.

With headwinds and tailwinds, how high will the cabling market fly? Cabling for enterprise local area networks (LANs) experienced growth of between 1 and 2 percent in 2013, while cabling for data centers grew 3.5 percent, according to BSRIA, for a total global growth of 2 percent. The structured cabling market is facing a turbulent time. Structured cabling in data centers continues to move toward the use of fiber. The number of smaller data centers that will use copper will decline.

Businesses will increasingly shift from buying IT products to purchasing infrastructure-as-a-service and software-as-a-service. Both trends will increase the need for processing and storage capacity in data centers, and we also need fast connections to those data centers. This will cause significant growth in WiFi traffic, which will mean more structured cabling used to wire access points. Convergence will also result in more cabling for Internet Protocol (IP) cameras, building management systems, access controls and other applications. This could mean a decrease in the installation of separate special-purpose cabling for those applications.

The future of your data center network is a moving target, but one thing is certain: it will be faster. The four key developments in this field are 40GBase-T, Category 8, 32G and 128G Fibre Channel, and 400GbE.

Ethernet will increasingly move beyond the 10/100/1000 speed series as proposals for new speeds push in. The move beyond gigabit Ethernet is gathering pace, with a cluster of vendors gathering around the IEEE standards effort to bring 2.5 Gbps and 5 Gbps speeds to the ubiquitous Cat 5e cable. With the IEEE standardisation process under way, the MGBase-T alliance represents the industry’s effort to accelerate adoption of 2.5 Gbps and 5 Gbps speeds for connections to fast WLAN access points. Intense attention is being paid to the development of 25 Gigabit Ethernet (25GbE) and next-generation Ethernet access networks. Development of 40GBase-T is also under way.

Cat 5e vs. Cat 6 vs. Cat 6A – which should you choose? Stop installing Cat 5e cable. “I recommend that you install Cat 6 at a minimum today.” The cable will last much longer and supports higher speeds that Cat 5e simply cannot. Category 8 cabling is coming to data centers to support 40GBase-T.

A Power over Ethernet plugfest is planned for 2015 to test Power over Ethernet products. The plugfest will focus on the IEEE 802.3af and 802.3at standards relevant to IP cameras, wireless access points, automation, and other applications. It will test participants’ devices against the respective IEEE 802.3 PoE specifications, which distinguishes IEEE 802.3-based devices from other, non-standards-based PoE solutions.
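As a reminder of what the plugfest is testing against: IEEE 802.3af allows up to 15.4 W at the power sourcing equipment (about 12.95 W guaranteed at the powered device after cable loss), while 802.3at (PoE+) raises that to 30 W (about 25.5 W at the device). The nominal budgets can be captured in a tiny lookup (the figures are from the standards; the helper itself is just a sketch):

```python
# Nominal PoE power budgets per IEEE 802.3 variant (watts).
# PSE = power sourcing equipment; PD = powered device (after cable loss).
POE_BUDGETS = {
    "802.3af": {"pse_max_w": 15.4, "pd_max_w": 12.95},
    "802.3at": {"pse_max_w": 30.0, "pd_max_w": 25.5},
}

def pd_budget(standard: str) -> float:
    """Return the guaranteed power (W) available to a powered device."""
    return POE_BUDGETS[standard]["pd_max_w"]

# Example: an IP camera drawing 20 W needs 802.3at; 802.3af is not enough.
print(pd_budget("802.3af") >= 20)  # False
print(pd_budget("802.3at") >= 20)  # True
```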

Gartner expects that wired Ethernet will start to lose its position in the office in 2015, or within a few years after that, as Internet use shifts mainly to smartphones and tablets. The change is significant, because it will break Ethernet’s long reign in the office. Consumer devices have already moved to wireless, and now it is the office’s turn. Many factors speak in favor of the mobile office. Research predicts that by 2018, 40 per cent of enterprises and organizations of various sizes will define WLAN as the default connection; current workstations, desktop phones, projectors and the like would then move to wireless. Expect the wireless LAN equipment market to accelerate in 2015 as spending by service providers and education comes back, 802.11ac reaches critical mass, and Wave 2 products enter the market.

Scalable and secure device management for telecom, network, SDN/NFV and IoT devices will become a standard feature. Whether you are building a high-end router or deploying an IoT sensor network, a device management framework with support for new standards such as NETCONF/YANG and web technologies such as Representational State Transfer (REST) is fast becoming a standard requirement. Next-generation device management frameworks can provide substantial advantages over legacy SNMP and proprietary frameworks.
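To give a flavor of what a NETCONF/YANG-based framework exchanges, the sketch below builds a minimal NETCONF `edit-config` RPC of the kind sent over a NETCONF session. The base namespace and operation are from RFC 6241; the interface fragment and its `urn:example:interfaces` namespace are illustrative placeholders, not from any specific device or YANG model:

```python
import xml.etree.ElementTree as ET

# NETCONF base namespace (RFC 6241).
NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_edit_config(datastore: str, config: ET.Element) -> bytes:
    """Build a minimal NETCONF <edit-config> RPC targeting a datastore."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}{datastore}")
    cfg = ET.SubElement(edit, f"{{{NC}}}config")
    cfg.append(config)
    return ET.tostring(rpc)

# Hypothetical YANG-modeled fragment: enable an interface.
iface = ET.Element("interface", {"xmlns": "urn:example:interfaces"})
ET.SubElement(iface, "name").text = "eth0"
ET.SubElement(iface, "enabled").text = "true"

payload = build_edit_config("running", iface)
print(payload.decode())
```

In a real deployment the payload would travel over an SSH-based NETCONF session and the fragment would be validated against the device’s YANG models, which is exactly the machine-checkable structure SNMP-era management lacked.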

 

U.S. regulators resumed consideration of mergers proposed by Comcast Corp. and AT&T Inc., suggesting a decision as early as March: Comcast’s $45.2 billion proposed purchase of Time Warner Cable Inc and AT&T’s proposed $48.5 billion acquisition of DirecTV.

There will be changes in the management of global DNS. The U.S. is in the midst of handing over its oversight of ICANN to an international consortium in 2015. The National Telecommunications and Information Administration, which oversees ICANN, assured people that the handover would not disrupt the Internet as the public has come to know it. Discussion is going on about what can replace the US government’s current role as IANA contract holder. IANA is the technical body that runs things like the global domain-name system and allocates blocks of IP addresses. Whoever controls it controls the behind-the-scenes of the internet; today, that’s ICANN, under contract with the US government, but that agreement runs out in September 2015.

 

1,044 Comments

  1. Tomi Engdahl says:

    Internet Under Fire Gets New Manifesto
    “Personal is Human. Personalized isn’t.”

    New Clues
    https://medium.com/backchannel/internet-under-fire-gets-new-manifests-207a922b459e
    From two Cluetrain authors, Doc Searls and David Weinberger

    Reply
  2. Tomi Engdahl says:

    Build 802.11ah waveforms with a signal generator
    http://www.edn.com/electronics-products/other/4438939/Build-802-11ah-waveforms-with-a-signal-generator

    Signal Studio from Keysight provides flexible, standards-based waveforms that let you test 802.11ah-enabled IoT (Internet of Things) devices. Signal Studio now supports all major versions of the 802.11 standard, including 802.11a/b/g/j/p and 802.11n/ac.

    IEEE 802.11ah is an emerging standard that defines a WLAN system operating in sub 1-GHz, license-exempt bands. Keysight views 802.11ah as a key enabler of IoT devices, relying on power efficient, long range, and scalable Wi-Fi services. Compared to existing 802.11 standards operating in the 2.4 GHz and 5 GHz bands, 802.11ah offers a significantly improved transmission range. This makes it ideal for use in large-scale sensor networks, extended range hotspots, power efficient devices, and outdoor Wi-Fi for cellular traffic offloading.

    After creating 802.11ah-compliant reference signals, you can download them to a variety of Keysight instruments.

    Reply
  3. Tomi Engdahl says:

    Broadcom targets terabit switch markets with fat StrataDNX chipset
    It’s a mighty fine rack, veep tells us
    http://www.theregister.co.uk/2015/03/18/broadcom_stratadnx_chipset/

    Broadcom wants a bigger footprint in the router and switch market with the next iteration of its top-line silicon, the StrataDNX line.

    While conceding that merchant silicon isn’t going to push Cisco or Juniper Networks out of the network core where the biggest traffic flows exist, Broadcom reckons that with the ability to support beyond-terabit throughput, huge buffers and highly configurable packet processing, StrataDNX will still find a home in data centre, large enterprise and service provider networks.

    The StrataDNX comprises two packet processors, Jericho and QumranMX, and the BCM88770 fabric device. The 3.6 Tbps-per-device BCM88770 drives the pitch into the medium-to-large chassis market for Broadcom.

    Velaga said this means users are “now able to deliver modular chassis delivering multi-terabits per slot, and single systems delivering over 100 Tbps”.

    The matching packet processors, Velaga said, will let OEMs deploy very similar capabilities across different systems without having to design across different silicon.

    Velaga said that with top-tier vendors already sampling the chips, customers should expect to see products shipping within a year.

    Reply
  4. Tomi Engdahl says:

    Is License-Assisted Access Needed?
    http://www.networkcomputing.com/wireless-infrastructure/is-license-assisted-access-needed/a/d-id/1319452?

    Some carriers are eyeing the use of LTE in unlicensed 5GHz spectrum as a way to keep up with skyrocketing capacity demands, but the technology isn’t an ideal solution.

    The unquenchable thirst for more wireless speed and capacity is a major problem for mobile network operators today. As the latest 802.11ac wave 2 WiFi technology approaches theoretical speeds in excess of 1Gbps, LTE technologies are quickly falling behind.

    One solution that some carriers are looking into is the idea of leveraging unlicensed spectrum in the 5 GHz space — the same spectrum where 5GHz WiFi resides today. The technology is known as License-Assisted Access (LAA) or LTE-Unlicensed (LTE-U), and it’s being backed by multiple wireless vendors and carriers around the world.

    But just because the technology is gaining in popularity with mobile carriers doesn’t mean it’s a good thing for consumers or enterprise IT departments. So the question is, will LAA-LTE solve the problem it was intended to fix?

    An organization known as the 3rd Generation Partnership Project (3GPP) is the primary group spearheading LAA development.

    In other words, use small LAA cells that adhere to unlicensed transmit power/gain restrictions and backhaul that traffic over privately licensed LTE spectrum. This essentially clears up a great deal of congestion on carriers’ licensed spectrum by offloading edge devices to the unlicensed 5 GHz space.

    My first reaction is that this is a terrible idea that will further crowd an already crowded 5GHz unlicensed wireless spectrum. WiFi and LAA are also two completely different protocols, and there is major concern that LAA won’t play nice in regard to sharing the airwaves with WiFi networks.

    Additionally, current LAA technologies can’t overlap in the same wireless spectrum. That means only one carrier can be present in any given physical location.

    If we give LAA and its spectrum-sharing capabilities time to mature, we could possibly reach a point where WiFi and multiple carriers using LAA coexist in the same spectrum without stepping on each other’s toes. But keep in mind that this would only hold for a fixed period of time: each time a new WiFi or LAA technology gets updated, there would be extra work involved to ensure that both technologies could cohabitate. Ultimately, this means extra time spent certifying new specifications, which would slow down the standardization and deployment of both protocols.

    T-Mobile is the first US-based mobile carrier to announce their plans to launch LAA sometime this year, but having a network ready to go means nothing without mobile devices that contain LAA-compatible wireless radios.

    It’s clear why mobile operators want to push forward with LAA.

    Overall, considering the proliferation of private, public and even mobile carrier-grade WiFi networks, I don’t see a tremendous need for LAA at this point in time. That may very well change in the future.

    Reply
  5. Tomi Engdahl says:

    RCS will replace text messages

    Traditional SMS and MMS are disappearing. OTT, or over-the-top, is a dirty word for many operators. It refers to services that web companies offer over the operator’s pipe, usually as free applications. This will eventually lead to a decline in traditional text messaging, and it takes a large notch out of operators’ revenues.

    SMS, the text messaging service, was for many years an essential part of mobile phone use for consumers, and at the same time a vital source of income for operators. However, change is permanent, and because the basic mobile phone has largely been replaced by the palm-sized computer known as the smartphone, Internet-based instant messaging services have become the industry norm.

    The line between SMS and MMS (Multimedia Messaging Service) has in any case blurred as various messaging applications have changed the way people use messaging services. Operators hope they can replace the existing text and multimedia message systems, as well as WhatsApp and Facebook messages, with RCS (Rich Communication Services).

    RCS stands for Rich Communication Services. It is an IP-based service intended to replace the current SMS and MMS messages and to offer users a lot of new features on top.

    RCS provides standard SMS and MMS message features; in addition, it supports one-to-one and group instant messaging (IM) as well as file and video sharing over the network. From the subscriber’s point of view, RCS centers on an enhanced address book: a library of the user’s contacts that is synchronized between the network and the device. RCS can show visually which services each contact is able to participate in. The functions look similar to, say, Instagram or WhatsApp, but in some respects RCS provides better services. Users can, for example, switch media within a single message session.

    When services such as Facebook or Instagram are used over a mobile phone’s Internet connection, they are called OTT (over-the-top) services: their data packets are simply carried across the operator’s network. RCS is practically the operator’s own OTT service, in which the bits typically move over the LTE network through an IMS (IP Multimedia Subsystem) element. For marketing reasons, RCS has been given the brand name Joyn ( http://www.joynus.com ).

    In 2011, an updated version of RCS called RCS-e was launched, intended for rapid commercialization of the services. Although RCS is a standards-based platform, its introduction involves problems, as with all new online services.

    RCS takes advantage of the IMS core system, which was originally part of the 3G specifications. This service platform implements, for example, authentication, authorization, registration, billing and routing.

    RCS has received support from a wide ecosystem of suppliers. This reduces the need for operators to develop their own operator-specific RCS implementations.

    As with voice calls and SMS messages, RCS client software can be installed on a cell phone during the manufacturing process. RCS is activated, and the user automatically detected, when the phone is switched on and the SIM card registers to the network.

    For the client, getting started with RCS is easy. For the operator, it is a complex service which must be secured onto an existing network. This requires careful testing before introduction into the live network. RCS’s status as an industry-wide standard does not actually guarantee that it will work seamlessly with every 3GPP-compatible network.

    At some point the network operator will want to offer the new RCS services to subscribers. Before that, the operator must verify that the live service provides a sufficiently high-quality user experience, that it does not slow down or undermine other online services, and that RCS maintains the security and privacy of its users.

    Sources:
    http://www.etn.fi/index.php?option=com_content&view=article&id=2575:rcs-korvaa-tekstiviestit&catid=13&Itemid=101
    http://www.etn.fi/index.php?option=com_content&view=article&id=2574:rcs-tekstiviestin-korvaaja-vaatii-tiukkaa-testaamista&catid=13&Itemid=101

    Reply
  6. Tomi Engdahl says:

    T-Mobile promises rates won’t rise, now paying Verizon Edge & AT&T Next bills for switchers, more
    http://9to5mac.com/2015/03/18/t-mobile-promises-rates-wont-rise-now-paying-verizon-edge-att-next-bills-for-switchers-more/

    Today during its latest Un-carrier event, T-Mobile announced new initiatives including brand new plans and pricing for businesses and a promise to all of its customers that it won’t increase prices. It’s also paying off bills for those locked into leasing plans on other carriers if they switch to T-Mobile.

    That “Un-contract” guarantee that prices won’t increase goes for all of its Simple Choice plans (including Simple Choice promotional plans) as long as you remain a T-Mobile customer with a qualifying plan. However, the carrier will only offer the guarantee to those with unlimited 4G LTE data for two years.

    Another initiative announced by the carrier today dubbed “Carrier Freedom” will pay up to $650 in outstanding payments for customers switching to T-Mobile.

    Reply
  7. Tomi Engdahl says:

    Full-Duplex Radio Integrated Circuit Could Double Radio Frequency Data Capacity
    http://tech.slashdot.org/story/15/03/18/1733245/full-duplex-radio-integrated-circuit-could-double-radio-frequency-data-capacity

    Full-duplex radio communication usually involves transmitters and receivers operating at different frequencies. Simultaneous transmission and reception on the same frequency is the Holy Grail for researchers, but has proved difficult to achieve.

    Full-duplex radio integrated circuit could double radio frequency data capacity
    http://www.gizmag.com/full-duplex-wireless-radio-integrated-circuit/36580/

    Full-duplex radios that have been built so far have proven complex and bulky, but to be commercially useful in the ever-shrinking world of communications technology, miniaturization is key. To this end, engineers at Columbia University (CU) claim to have created a world-first, full-duplex radio transceiver, all on one miniature integrated circuit.

    Implemented at the nanoscale on a CMOS (Complementary Metal Oxide Semiconductor) integrated circuit (IC), the new device from the CU team enables simultaneous wireless radio transmission and reception at the same frequency. Dubbed CoSMIC – for Columbia high-Speed and Mm-wave IC – the team believes that its transceiver could help revolutionize mobile communications technology.

    “This is a game-changer,” said Associate Professor Harish Krishnaswamy of CU’s Fu Foundation school of Engineering and Applied Science. “By leveraging our new technology, networks can effectively double the frequency spectrum resources available for devices like smartphones and tablets.”

    To achieve this reported full-duplex capability, the team needed first to cancel out the transmitter’s echo on the frequency so that the minute “whisper” of the received signal could be heard.

    “Transmitter echo or ‘self-interference’ cancellation has been a fundamental challenge, especially when performed in a tiny nanoscale IC, and we have found a way to solve that challenge.”

    The researchers are not actually the first to produce a full-duplex radio system, nor are they the first to use the analogy of the whisperer/shouter. Stanford University produced a system using ostensibly similar techniques, and explained its system using exactly the same premise as the Columbia team.

    However, what Columbia is first at is the miniaturization of the full-duplex transceiver onto an integrated circuit.

    “Our work is the first to demonstrate an IC that can receive and transmit simultaneously,”

    New Technology May Double Radio Frequency Data Capacity
    http://www.engineering.columbia.edu/new-technology-may-double-radio-frequency-data-capacity-0

    So the ability to have a transmitter and receiver re-use the same frequency has the potential to immediately double the data capacity of today’s networks. Krishnaswamy notes that other research groups and startup companies have demonstrated the theoretical feasibility of simultaneous transmission and reception at the same frequency, but no one has yet been able to build tiny nanoscale ICs with this capability.

    “Our work is the first to demonstrate an IC that can receive and transmit simultaneously,” he says. “Doing this in an IC is critical if we are to have widespread impact and bring this functionality to handheld devices such as cellular handsets, mobile devices such as tablets for WiFi, and in cellular and WiFi base stations to support full duplex communications.”

    The biggest challenge the team faced with full duplex was canceling the transmitter’s echo. Imagine that you are trying to listen to someone whisper from far away while at the same time someone else is yelling while standing next to you. If you can cancel the echo of the person yelling, you can hear the other person whispering.

    This work was funded by the DARPA RF-FPGA program.

    Comments from http://tech.slashdot.org/story/15/03/18/1733245/full-duplex-radio-integrated-circuit-could-double-radio-frequency-data-capacity

    The article is misleading. Transmission and reception on the same “frequency” is done today. However, there’s some other “discriminator” in the signal. Either modulation method, phase, shift, orientation, or “something” is different so that the receive and transmit don’t collide.

    This article — despite its misleading introduction — talks about a limited application whereby RX and TX can occur using the same frequency *BAND* (they say “spread spectrum”) and allow full-duplex communication. The advance is that this is all on one chip.

    What would be truly revolutionary, like the example of two people talking to each other at the same time, is the ability to transmit and receive using the *same* exact method by both transceivers. THAT would be the holy grail.

    Not there yet.

    The issue is that a strong transmission in the same band as a receiver can desense the receiver. This can also be done with a cavity duplexer if you need input and output in the same band on adjacent frequencies, but you pay for it with geometric space (since cavity duplexer dimensions are a fraction of the wavelength in free space multiplied by the materials velocity factor). This can be problematic on HF and VHF bands, but UHF and microwave can get away with duplexers the size of a brick. Unfortunately, that’s still too much for mobile phones since it’s too big to fit in someone’s pocket.

    Hi all, I was perusing through all the comments, and as one of the authors of the work, I thought I would clarify some of the points that were raised to aid the discussion:
    1. The chip targets same-channel full duplex, meaning the transmitter and the receiver work in the same frequency channel at the same time, and are not separated by polarization, modulation format etc. Therefore, since transmitted signals are around +20dBm and receiver sensitivity levels are around -90dBm, nearly 110dB of suppression through isolation (across a pair of antennas or a circulator) and echo (aka self-interference or SI) cancellation must be achieved (as one of the people above has correctly pointed out). Such a high degree of SI cancellation requires that SI cancellers be implemented in all domains (RF, analog and digital, each yielding a part of the total SI suppression).
    2. As one of the people above has pointed out, even if the signals were separated in modulation format for instance, the transmitter SI would be so powerful that it would saturate the receiver front end before modulation-format-based separation can be achieved in the digital domain. So echo cancellation at the receiver front end is required.
    3. As someone points out, circulators and echo cancellers have existed for quite a while and have been implemented in many ways. The innovation here is that we perform echo or SI cancellation at RF in a single chip, which has not been done before.
    4. Moreover, the SI cancellation approach can tackle echos that experience significant delay (as high as 20ns) while still fitting with an IC form factor through the use of on-chip reconfigurable high-Q filters, enabling cancellation of wideband signals (>20MHz enabling use for WiFi).
    5. Finally, indeed the varying environment is a challenge and the RF and digital SI cancellers need to be reconfigured periodically (milli-seconds).
    Hope this helps.

    Some more clarifications:
    6. The chip has been fully tested, and is able to provide the required SI cancellation so that the desired signal can be received without distortion in the presence of the powerful transmitter echo. What remains to be tested are rate gains when several of these chips are networked. This is not that straightforward because today’s networks are designed for half-duplex nodes, not full-duplex. So new scheduling concepts etc. need to be developed, which is a topic of research.
    7. Echo cancellation is certainly not old technology. While echo cancellation techniques exist, they use techniques that cannot be integrated into an IC (e.g. cm-long transmission lines to replicate 10s of nanoseconds of delay spread, photonic techniques etc.). The innovation here is a technique that can replicate the delay spreads of the echo at RF frequencies on an IC.

    Reply
  8. Tomi Engdahl says:

    HP:
    What’s Next for China? The Chinese Telecom Industry’s Effect on its Economy — With China’s scale and ceaseless drive to innovate, telecom tech could be key to the nation’s continued growth as the world’s leading financial market.

    What’s Next for China? The Chinese Telecom Industry’s Effect on its Economy
    https://ssl.www8.hp.com/hpmatter/issue-no-4-spring-2015/whats-next-china-chinese-telecom-industrys-effect-its-economy

    With China’s scale and ceaseless drive to innovate, telecom tech could be key to the nation’s continued growth as the world’s leading financial market.

    Depending on how you crunch the numbers, China has surpassed the United States as the world’s largest economy, or is very close to doing so. Now it faces the so-called middle-income trap—the precarious step that separates economies based on cheap labor from those fueled by added value. That’s why the telecom industry, one of the country’s fastest-growing sectors, could play a key role in China’s larger economic future.

    China’s Mobile Market

    With close to 1.3 billion mobile-phone owners and 700 million Internet users, China is the most populous digital-telecom market in the world. But scale alone is only part of the story. In per capita terms, telecommunications spending is relatively low in China. Despite all those Internet users, for example, China only narrowly surpassed the U.S. market of 277 million users in electronic retail spending in 2013 ($295 billion to $270 billion, according to a study by the McKinsey Global Institute). And China’s total Internet economy, worth some $407 billion, was just over half the size of the U.S.’s, valued at $721 billion.

    The Growth Cycle

    Take, for example, China’s smartphone and tablet markets. In 2015, the number of operational devices in China is expected to exceed 900 million—an impressive step up from 700 million in 2014 and a quantum leap from 380 million in 2013, according to the McKinsey report.

    The growth rate may be slowing, but it has triggered a boom for mobile apps and advertising. This market will double in size by 2018 (from $7.1 billion in 2014 to $15.7 billion), according to research by the London-based analysis firm IHS.
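    Doubling from $7.1 billion in 2014 to $15.7 billion implies a compound annual growth rate of roughly 22 per cent, assuming the four-year window 2014-2018:

```python
# Implied CAGR for China's mobile app/advertising market, 2014 -> 2018.
start, end, years = 7.1, 15.7, 4  # $bn, $bn, years
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~21.9%
```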

    New Thinking

    Traditionally, Chinese tech firms have been famous for imitating foreign technology. There are independent Chinese versions of Facebook (Renren) and Twitter (Weibo). Even the Chinese tech firm Xiaomi has been dubbed, fairly or unfairly, “China’s Apple.” But in recent years Chinese consumers have begun setting their own trends and Chinese firms have been introducing their own innovations.

    Challenges

    Scale and ambition can move economic mountains. But China’s climb toward continued, sustainable tech growth is not without friction. The two biggest obstacles in telecom may prove to be online censorship and data security.

    Reply
  9. Tomi Engdahl says:

    Content Barons, Smart Dust & SkyNet: 6 Telecommunications Disruptions for 2020
    Technical innovation will be critical to the telecom industry over the next five years.
    https://ssl.www8.hp.com/hpmatter/issue-no-4-spring-2015/content-barons-smart-dust-skynet-6-telecommunications-disruptions-2020

    The six major disruptions that will drive the most change in Telecommunications by 2020 are:

    Integration
    Thingification
    Mobility
    Saturation
    Security
    Ascension

    1. Integration
    As I pondered the coming half-decade for the telco industry, perhaps the most obvious disruption that has appeared is that of Vertical Integration. Communication, or more precisely connectivity, has quickly become a utility. It is no less critical to modern living than water or electricity.
    This leads to several interesting and predictable results. First and foremost is the commoditization of connectivity. Being connected keeps getting cheaper and cheaper, adhering rather slavishly to Moore’s Law of diminishing costs. Ever-increasing capacity and diminishing marginal utility, combined with just a little bit of competition, lead to what is sometimes called the Pricing Death Spiral: the cost of providing the service keeps falling, and competition means that the price keeps getting smaller and smaller in a strong, self-reinforcing feedback loop.

    2. IoT: The Traffic Explosion
    The next major trend that will impact Telecommunications is the explosion of connected devices. This Internet of Things, or Thingification, will add billions if not trillions of new connected data sources globally by 2020. Objects throughout our lives will become connected, aware and chatty, constantly transmitting information across our global networks.
    The upshot of all of these devices will be an astronomical growth in data volumes.
    Most first generation “smart” objects aren’t really smart, they’re chatty. They use sensors to describe their situation, but they’re not really analyzing or thinking about that data; they just measure and transmit. As traffic explodes and as these devices evolve, we will embed more and more intelligence in these end-points.
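    The “chatty vs. smart” distinction above can be sketched in a few lines. This is a hedged illustration (the deadband filter is a common edge-filtering technique, not something the article prescribes): a chatty endpoint transmits every sample, while a smarter one transmits only readings that change meaningfully.

```python
def chatty_uplink(readings):
    """Naive endpoint: transmit every sample it measures."""
    return list(readings)

def deadband_uplink(readings, threshold=0.5):
    """Edge-filtered endpoint: transmit only when the value moves more
    than `threshold` away from the last transmitted value."""
    sent, last = [], None
    for r in readings:
        if last is None or abs(r - last) > threshold:
            sent.append(r)
            last = r
    return sent

samples = [20.0, 20.1, 20.2, 21.0, 21.1, 25.0, 25.1]
print(len(chatty_uplink(samples)))    # 7 transmissions
print(len(deadband_uplink(samples)))  # 3 transmissions
```

    Multiplied across billions of endpoints, the gap between seven and three transmissions per measurement window is exactly the traffic explosion the article warns about.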

    3. Mobility
    It shouldn’t come as a surprise to anyone that mobility is a huge trend in Telecommunications. In 2014, there were an estimated 1.2 billion people with mobile devices worldwide. By 2020, this number is forecasted to exceed 5 billion, or about 80 percent of humanity. (Interestingly, there will be more than 9 billion subscriptions, reflecting a large number of people with multiple accounts.) Global growth of mobile connectivity is far outpacing hardline connectivity.

    4. Saturation: The Search for Growth
    It is widely noted in the Telecommunications industry that while traffic has grown dramatically, per-customer revenues have remained relatively flat. Customers are demanding more and more bandwidth, yet the price is dropping at about the same rate as data growth. This shouldn’t come as an enormous surprise. Consumers view connectivity as a utility, with a given cost per month for use.

    5. Security: The Network is the Threat
    No discussion of the present or future of the Internet can ignore the issue of security. In 2013, one in five Americans had their identity stolen, and that number is increasing rapidly. In February 2015, Intuit, maker of Turbo Tax software, actually suspended sending tax returns to state tax authorities, as there were apparently millions of fraudulent tax returns being filed in the first week of tax season. Security used to involve setting up a firewall and then walking away. Things have changed, and dramatically so. Security is no longer about prevention and protection; the walls of the castle have all been breached. Security is now an issue of detection, intervention, and interception of threats that are ever-present.
    Encryption will become much more widely utilized, with the expectation that network performance won’t suffer as a result. Virtual Private Network technology also will advance, making networks both more flexible and more secure.

    6. Ascension: Skynet Finally Gets Real
    With this last prediction I’m going to stick my neck out a bit. In the 1990s several companies attempted to bring broadband technology into orbit. Teledesic, Globalstar and Iridium all planned on deploying fleets of tens, hundreds, even thousands of satellites which would provide broadband networking to every place in the world. The vision behind these networks was bold, courageous, forward-thinking, and fatally flawed. While companies poured billions of dollars into building and deploying these networks, most of them never left the ground, and those that did have gone through repeated cycles of bankruptcy and restructuring.
    It was troubling to me that these networks in the sky, Skynets, failed because I actually worked on some of their designs back when I started my career as a satellite engineer. Their designs were very advanced, sophisticated, even elegant.
    There are obvious reasons why these satellite constellations wouldn’t work. Launch costs were enormous. Teledesic would have required hundreds of launches at $100 million a pop. The networks weren’t really usable until the entire network was in place, which would require years of launches to complete. The satellites themselves had useful lives of only a handful of years.
    Nonetheless, I’m predicting that Skynet 2.0 is about to reappear. These space-, balloon-, or drone-based systems will provide high-quality broadband access to anywhere and everywhere in the world, they’ll do it affordably, and they’ll likely start arriving around 2020. And this time, they’ll be wildly successful.

    Reply
  10. Tomi Engdahl says:

    The Cloud & Your Connection: How NFV is Revolutionizing Telecommunications
    https://ssl.www8.hp.com/hpmatter/issue-no-4-spring-2015/cloud-your-connection-how-nfv-revolutionizing-telecommunications

    For Communications Service Providers, servers and mainframes have given way to the cloud. NFV, Network Functions Virtualization, is what’s making it all possible.

    Reply
  11. Tomi Engdahl says:

    One API to rule them all: The great network switch silicon heist
    Protocols? Pfff, don’t make us laugh
    http://www.theregister.co.uk/2015/03/20/datacenter_api/

    Microsoft, Dell and Facebook are among a group of vendors who have come together with data centre operators to develop software intended to abstract chunks of network silicon (switches, typically) from the network operating system that runs on them.

    The first implementations of the specification for this Switch Abstraction Interface (SAI) were showcased at the recent Open Compute Project Summit in March 2015.

    SAI is a specification to provide a consistent programming interface for common networking functions implemented by network switch Application-Specific Integrated Circuits (ASICs).

    The intention here is for network switch vendors like Brocade, Cisco, Netgear and others to be able to build “new and innovative features” through extensions that come back to this consistent programming interface and therefore attract a wider base of users.

    Equally, software application developers programming network-level software will be able to work with a more interoperable outlook and therefore customise more openly.

    To put it in simple terms, the Switch Abstraction Interface is a network-level Application Programming Interface (API). If we accept that APIs exist to form vital communications bonds between different software code elements and data streams, then a network API has the job of connecting the operating system to the network switches. It’s the same kind of thing, just deeper down.

    Creating a more open API for this type of task means, in theory, that the network operating system should be able to control switch behaviour irrespective of what type of system of protocols and silicon base it is running on, without the need for conversion codes.

    Software developers don’t want to care about whether their code runs on an Intel chip or an AMD chip. Ultimately they don’t want to care about whether their code runs on Windows, Linux, Apple OS X or mobile. By the same rule then, network software engineers don’t want to care about which switch protocols they need to code to. A more open API to rule them all works better all round.
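    The abstraction described above can be sketched in miniature. This is a hypothetical Python model of the idea, not the real SAI (which is specified as a C API); the class and method names are invented for illustration:

```python
class SwitchASIC:
    """Hypothetical abstract switch interface: the network OS programs
    against these methods; each ASIC vendor supplies an implementation."""
    def create_vlan(self, vlan_id):
        raise NotImplementedError
    def set_port_admin_state(self, port, up):
        raise NotImplementedError

class VendorAASIC(SwitchASIC):
    """Stand-in for calls into one vendor's ASIC SDK."""
    def __init__(self):
        self.log = []
    def create_vlan(self, vlan_id):
        self.log.append(f"A: vlan {vlan_id}")
    def set_port_admin_state(self, port, up):
        self.log.append(f"A: port {port} {'up' if up else 'down'}")

def provision(asic):
    """Network-OS code: vendor-agnostic, talks only to the interface."""
    asic.create_vlan(100)
    asic.set_port_admin_state(1, up=True)

asic = VendorAASIC()
provision(asic)
print(asic.log)  # ['A: vlan 100', 'A: port 1 up']
```

    Swapping in a `VendorBASIC` with the same methods changes nothing in `provision()`; that interchangeability is exactly what a consistent switch API is after.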

    Reply
  12. Tomi Engdahl says:

    Cable tie eliminates need for clamps and other tools
    http://www.cablinginstall.com/articles/2015/03/southwire-cable-tie.html?cmpid=EnlContractorMarch192015

    Southwire recently announced the addition of a Cable Tie to its wire-accessory product set. “This new patent-pending cable tie is simply the most versatile cable tie on the market today,”

    ” Its patent-pending design eliminates the need to carry wire clamps, hose clamps, loop clamps and countless other items needed to get the job done”

    The cable ties now offered by Southwire feature “convenient pass-through slots for securing the cable tie, if needed, without splitting the tie itself, as often occurs with traditional ties,”

    Reply
  13. Tomi Engdahl says:

    Guest blog: How to field test for PIM
    http://www.cablinginstall.com/articles/2015/03/commscope-guest-blog.html?cmpid=EnlContractorMarch192015

    In just a few short years, passive intermodulation (PIM) has gone from a vaguely understood but accepted nuisance to a major concern that wireless service providers seek to manage and minimize.

    PIM’s rise in importance coincides with the increasing complexity of today’s wireless networks, including the use of higher orders of modulation and more frequency bands. As wireless service providers add the most recent 4G/long-term evolution (LTE) capabilities to their networks, the incidence and effects of PIM on performance and profitability are on the rise.

    In the testing lab and among RF engineers, PIM is a key concern. Wireless service providers have been vigilant about establishing more stringent PIM standards. Many system vendors have created proactive processes and testing procedures to ensure these standards are being met, if not exceeded.

    When it comes to field-testing by installers and service technicians, the awareness of PIM and how to properly detect it may not be as strong as it needs to be. Field testing for PIM introduces a number of additional variables that, if not properly accounted for, may result in wide-ranging discrepancies and inaccurate readings.

    a new white paper called
    PIM Testing: Advanced wireless services emphasize the need for better PIM control
    http://info.commscope.com/2014GatedAssetts_PIMTestingWhitepaper.html?utm_source=blog&utm_medium=socialmedia&utm_campaign=blogging

    Reply
  14. Tomi Engdahl says:

    Is License-Assisted Access Needed?
    http://www.networkcomputing.com/wireless-infrastructure/is-license-assisted-access-needed/a/d-id/1319452?

    Some carriers are eyeing the use of LTE in unlicensed 5GHz spectrum as a way to keep up with skyrocketing capacity demands, but the technology isn’t an ideal solution.

    The unquenchable thirst for more wireless speed and capacity is a major problem for mobile network operators today. As the latest 802.11ac wave 2 WiFi technology approaches theoretical speeds in excess of 1Gbps, LTE technologies are quickly falling behind. Part of this has to do with pure logistics in terms of upgrading an entire LTE network in a timely fashion. But much of the problem rests on the fact that more and more mobile devices are using the same, constricted licensed spectrum.

    One solution that some carriers are looking into is the idea of leveraging unlicensed spectrum in the 5 GHz space — the same spectrum where 5GHz WiFi resides today. The technology is known as License-Assisted Access (LAA) or LTE-Unlicensed (LTE-U), and it’s being backed by multiple wireless vendors and carriers around the world.

    Reply
  15. Tomi Engdahl says:

    Jon Brodkin / Ars Technica:
    Trade group led by AT&T and Verizon sues FCC to overturn net neutrality

    FCC calls early lawsuit “premature and subject to dismissal.”
    http://arstechnica.com/tech-policy/2015/03/trade-group-led-by-att-and-verizon-sues-fcc-to-overturn-net-neutrality/

    The Federal Communications Commission’s new net neutrality rules haven’t taken effect yet, but they’re already facing lawsuits from Internet service providers.

    One such lawsuit was filed today by USTelecom, which is led by AT&T, Verizon, and others. Another lawsuit was filed by a small Internet service provider in Texas called Alamo Broadband. (The Washington Post flagged the lawsuits.)

    The net neutrality order, which reclassifies broadband providers as common carriers and imposes rules against blocking and discriminating against online content, “is arbitrary, capricious, and an abuse of discretion,” USTelecom alleged in its petition to the US Court of Appeals for the District of Columbia Circuit. The order “violates federal law, including, but not limited to, the Constitution, the Communications Act of 1934, as amended, and FCC regulations promulgated thereunder.” The order also violates notice-and-comment rulemaking requirements, the petition said.

    Reply
  16. Tomi Engdahl says:

    Andrea Peterson / Washington Post:
    FTC replaces Mobile Tech Unit with Office of Tech Research and Investigation to tackle broader issues like privacy, security, big data, payments, IoT

    The FTC beefs up technology investigations with new office
    http://www.washingtonpost.com/blogs/the-switch/wp/2015/03/23/the-ftc-beefs-up-technology-investigations-with-new-office/

    The Federal Trade Commission is already the de facto government watchdog for digital privacy. Now, the agency is hiring more people to investigate how technological advancements affect consumers.

    “Today, I am pleased to announce the Bureau of Consumer Protection’s newest initiative to help ensure that consumers enjoy the benefits of technological progress without being placed at risk of deceptive and unfair practices – the formation of BCP’s Office of Technology Research and Investigation,”

    As technology has become an increasingly large part of people’s lives, the FTC’s consumer protection powers left it in the position of policing big tech companies. But that job requires a set of highly technical skills, resulting in the creation of the chief technologist position at the FTC in 2010.

    Reply
  17. Tomi Engdahl says:

    Emil Protalinski / VentureBeat:
    Facebook open-sources Augmented Traffic Control, a Wi-Fi tool for simulating 2G, Edge, 3G, and LTE networks
    http://venturebeat.com/2015/03/23/facebook-open-sources-augmented-traffic-control-a-wi-fi-tool-for-simulating-2g-edge-3g-and-lte-networks/

    Facebook today open-sourced Augmented Traffic Control (ATC), a Wi-Fi tool for testing how mobile phones and their apps handle networks of varying strength, over on GitHub. ATC simulates 2G, Edge, 3G, and LTE networks, and allows engineers to switch quickly between various simulated network connections.

    ATC came into existence after years of attempts, mostly at Facebook hackathons (starting in January 2013), to create test network conditions that simulate what users experience in the real world. Since Facebook wants as many people as possible to access its services at their full potential, it follows that the company should be able to test on wireless connections that more accurately reflect those that many ultimately use.

    Augmented Traffic Control: A tool to simulate network conditions
    https://code.facebook.com/posts/1561127100804165/augmented-traffic-control-a-tool-to-simulate-network-conditions/

    We created Augmented Traffic Control with open source technology, building on the work of others. We want to give the open source community the same chance to improve on our ideas and innovate with their own — so today we are open-sourcing our design for Augmented Traffic Control on GitHub.

    https://github.com/facebook/augmented-traffic-control
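    Under the hood, tools like ATC throttle traffic with rate limiting. Here is a minimal token-bucket sketch of that core mechanism (an illustrative model, not ATC’s actual code; the “2G-like” numbers are made up):

```python
class TokenBucket:
    """Allow traffic at `rate` bytes/sec with bursts up to `capacity` bytes."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, nbytes, now):
        # Refill tokens for the elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True   # packet passes immediately
        return False      # packet would be delayed or dropped by the shaper

# "2G-like" link: ~6 KB/s with a small burst allowance
bucket = TokenBucket(rate=6000, capacity=3000)
print(bucket.allow(1500, now=0.0))   # True: burst credit available
print(bucket.allow(1500, now=0.01))  # True
print(bucket.allow(1500, now=0.02))  # False: bucket drained, refill too slow
```

    Layering artificial delay and packet loss on top of a limiter like this gets you most of the way to a believable simulated 2G or 3G profile.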

    Reply
  18. Tomi Engdahl says:

    Optical Specs Plug Into Data Center
    Microsoft, vendors to define board interfaces
    http://www.eetimes.com/document.asp?doc_id=1326102&

    Optical communications will take a step closer to server and switch motherboards thanks to a new alliance that will develop interconnect standards. Microsoft has rallied a group of 14 data center vendors to form the Consortium for On-Board Optics (COBO) that hopes to release its first specifications in a year.

    COBO will “define electrical interfaces, management interfaces, thermal requirements and pin-outs…[for] optical modules that can be mounted or socketed on a network switch or adapter motherboard,” according to a press statement from the group. It is initially expected to focus on 100 and 400G links.

    “The goal is to bring the goodness of faceplate pluggable optics like SFP+ and QSFP+ to the on-board optics market,” said Brad Booth, COBO Chair and principal architect for Microsoft’s Azure Global Networking Services, in an email exchange with EE Times.

    Bringing optical connections to the board helps switch makers break through current limits of how many optical ports can fit on the front panel of a system. “This will permit system OEMs to mount the optical modules in the same manner that they mount switch ICs and in a location that benefits power consumption and heat dissipation,” Booth said.

    Currently vendors use a variety of proprietary formats for the so-called on-board or embedded optics. The COBO specs aim to create a level playing field in which data center operators can choose interchangeable modules from multiple companies.

    “The overall market for such optical modules is relatively small, but this initiative has the potential to increase interest,”

    The COBO spec will cover a variety of technologies. They include VCSEL-based boards such as those made by Finisar and emerging designs using silicon photonics such as those from Mellanox and the members of its Open Optics alliance supporting a version of silicon photonics based on wavelength division multiplexing.

    “The combination of open electrical and optical specifications holds the promise of an on board optical module that can provide connectivity to an entire rack of servers through a single fiber with the flexibility to choose the best and most cost-effective technology from a choice of interoperable vendors,”

    Reply
  19. Tomi Engdahl says:

    Ethernet Masquerading as Chaos
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1326087&

    The chairman of the 400 Gbit/s Ethernet effort describes its status in the context of all four major Ethernet standards efforts in the works.

    From a 30,000-foot view one might see Ethernet masquerading as chaos, as there is the expectation of four new speeds of Ethernet being introduced in the second half of this decade.

    Please consider that it took the IEEE 802.3 Working Group 12 years to introduce the last four speeds, starting with 1Gb/s Ethernet (GbE) in 1998, 10GbE in 2002, and 40GbE and 100GbE in 2010.

    It is anticipated that the IEEE 802.3 Ethernet Working Group will have formal projects in the near future simultaneously addressing new speeds: 2.5GbE, 5GbE, 25GbE, and 400GbE, while also working to introduce other lower speeds of Ethernet into automotive applications and across plastic optical fiber.

    As I said, Ethernet is masquerading as chaos. We, as an industry, often talk about the Ethernet ecosystem or as a singular industry. Nothing could be further from the truth. Ethernet is a common frame format that can be transmitted across a host of mediums to support a host of applications.

    To those seeking to hold onto the Ethernet of yesteryear, where you could count on Ethernet to increase by a factor of 10 after so many years – sorry – that time has passed. Over the past 15 years the Ethernet community has developed a wealth of knowledge and signaling technologies that can be leveraged to address the automotive, wireless access point, and high-volume server markets, which translates into new opportunities for the entire ecosystem.

    Let’s dive a little deeper into the development efforts for 400GbE.
    The architecture is a hybrid approach of the 10GbE and the 40GbE/100GbE architectures. All electrical interfaces are retimed and based on the 40GbE/100GbE approach.
    This approach (shown above) will allow future 400GbE physical layer specifications to easily create new Physical Coding Sublayers (PCS), if necessary.
    There are two variants of the CDAUI electrical interface, one for interconnecting chips and one for chip-to-module interconnects.
    There are four reach objectives for this project: 100m over multimode fiber, and 500m, 2km, and 10km over single-mode fiber.
    To date the only proposal selected, highlighted in green, is to satisfy the 100m multi-mode fiber objective. This approach will leverage the same MMF technology developed as part of the IEEE Std 802.3bm-2015 specification, and will use 16 fibers in each direction.
    The single mode fiber proposals for 400GbE are all highlighted in yellow, and none, to date, have been selected. Three different modulation schemes have been proposed: NRZ, PAM-4 or DMT.
    All of the proposals for the 2km and 10km objectives are based on duplex fiber approaches.

    The simple reality is we are seeing the Great Diversification of Ethernet, and we will see 2015 be another year in the development of a multitude of new Ethernet specifications.

    Reply
  20. Tomi Engdahl says:

    First figures in and it doesn’t look good for new internet dot-words
    Domain name industry holds breath as renewal cycle comes around
    http://www.theregister.co.uk/2015/03/24/first_figures_in_and_it_doesnt_look_good_for_new_internet_extensions/

    Reply
  21. Tomi Engdahl says:

    Google Just Fixed One of Wi-Fi’s Biggest Annoyances
    http://www.wired.com/2015/03/google-android-broken-wifi/

    Stop me if this sounds familiar (if it doesn’t, you’re not paying close enough attention). You walk into a coffee shop, where your phone hungrily gloms onto the open Wi-Fi network—probably called something like Netgear, or AT&TWiFi—and promptly stops working. The shop forgot to update the router’s firmware, or just bought budget hardware in 1997 thinking, “internet’s internet!” and never upgraded again. Either way, your phone just invisibly sabotaged its own connectivity.

    It’s a small, but consistent (and infuriating) problem, and Google has finally taken steps to solve it. With the new Android 5.1 update, which began rolling out yesterday in the U.S., your phone will remember which networks you attempt to connect with have crappy Wi-Fi, and save you from ever hopping on their bandwidth again.

    A few years ago, your phone’s Wi-Fi-hopping strategy made sense. Your 3G network was probably painfully slow, perpetually overloaded, and generally battery-crushing. You also probably had an adorably small data cap, measured in megabytes.

    AT&T even made an app for Android phones that would tell you where you could find WiFi—and it was super popular! The networks offered by cable companies, fast-food joints, and one candy store

    Today, though, LTE speeds are fast and efficient enough that you’re rarely better off on the sponsored connection at the train station. There’s almost never a good reason to keep stalling out on the same dud Wi-Fi networks. Android 5.1 spares you that agony, while still defaulting to reliable Wi-Fi networks when they’re available. Magic.
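    The remembering described above boils down to a small blocklist keyed on failed connectivity checks. A guess at the shape of the logic, not Android’s actual implementation:

```python
class WifiPicker:
    """Remember networks whose connectivity check repeatedly failed,
    and skip them on future scans. Hypothetical sketch, not Android code."""
    def __init__(self, max_strikes=2):
        self.strikes = {}
        self.max_strikes = max_strikes

    def report_check(self, ssid, internet_ok):
        if internet_ok:
            self.strikes.pop(ssid, None)   # network redeemed itself
        else:
            self.strikes[ssid] = self.strikes.get(ssid, 0) + 1

    def choose(self, scanned_ssids):
        # Prefer any network that hasn't repeatedly failed the check.
        for ssid in scanned_ssids:
            if self.strikes.get(ssid, 0) < self.max_strikes:
                return ssid
        return None   # nothing usable: fall back to cellular

picker = WifiPicker()
picker.report_check("CoffeeShopWiFi", internet_ok=False)
picker.report_check("CoffeeShopWiFi", internet_ok=False)
print(picker.choose(["CoffeeShopWiFi", "HomeNet"]))  # HomeNet
```

    The key design choice is the fallback to cellular when every scanned network is on the blocklist; a dud Wi-Fi network is now often worse than no Wi-Fi at all.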

    Reply
  22. Tomi Engdahl says:

    Russell Brandom / The Verge:
    Firechat working on bluetooth-based Greenstones, hardware widgets to strengthen mesh networks on mobile

    Meet the off-the-grid chatroom that fits in your pocket
    Building a local network, one necklace at a time
    http://www.theverge.com/2015/3/23/8267387/firechat-greenstone-mesh-network-bluetooth-wifi-peer-to-peer

    What if instead of connecting to the phone company, we connected directly to each other’s phones? Called “mesh networking,” the idea has been kicked around for years in hacker circles, but it got a major boost with iOS 7. Thanks to the new “Multipeer Connectivity Framework,” iPhones were able to connect to each other directly over Bluetooth or Wi-Fi, a minor software update with potentially major consequences.

    Firechat was one of the first apps to seize on the new feature, building a mesh-enabled chat app that drew a surprisingly large user base. The app recently reached 5 million users, popping up at music festivals and protests around the world. For protests, it was a way to communicate without routing through potentially hostile carriers, holding out even in the face of an internet blackout. For everyone else, it was just a fun way to jump off the grid.

    But as the system has grown, it’s run up against a serious range problem. The iPhone has a lot less Wi-Fi strength than you’d get from a router, and can’t reach nearly as far. Android phones are still stuck with Bluetooth for multipeer connections, which is even more limited.

    Now, we’re getting a look at Firechat’s answer. The company is developing a new hardware widget called Greenstone, designed to sit between phones, fill holes in the network, and go places that phones can’t go.

    The device also bridges over time, storing up to a thousand messages at once. If you need to send a message to a chat room that’s out of range, the Greenstone will store it locally until it can be delivered.

    That failsafe is important because the underlying network is so choppy. Greenstone is built entirely on Bluetooth, which has limited range and relies heavily on line of sight. If the device is in a crowd of moving people (like a concert), the device will be constantly hopping between connections, moving in and out of dead zones. You can wake the Greenstone up by shaking it (you’ll see a few lights go off), which will trigger a search for other Greenstones in the area.

    This kind of impromptu network is such a new idea that there isn’t really a word for what Greenstone is. (“Hub” doesn’t quite capture it, since the whole point of mesh is to avoid hubs.) It’s also still a prototype, and a long way from being ready for prime time.

    You won’t be able to buy a Greenstone any time soon, but that doesn’t mean you won’t see one in the wild. Benoliel has been testing the devices out at events — most recently at SXSW — and he’s toying with the idea of blanketing college campuses with them as a way to promote the Firechat app.
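    The store-and-forward behavior described above, holding up to a thousand messages until a peer comes into range, can be modelled as a bounded queue. An illustrative sketch, not Firechat’s code:

```python
from collections import deque

class StoreAndForwardNode:
    """Buffer messages for out-of-range rooms; flush when a peer appears.
    Oldest messages are evicted once the 1,000-message buffer is full."""
    CAPACITY = 1000

    def __init__(self):
        self.buffer = deque(maxlen=self.CAPACITY)

    def send(self, room, text, peer_in_range):
        if peer_in_range:
            return ("delivered", room, text)
        self.buffer.append((room, text))   # hold until someone is nearby
        return ("stored", room, text)

    def on_peer_discovered(self):
        # Drain everything we were holding to the newly visible peer.
        delivered = list(self.buffer)
        self.buffer.clear()
        return delivered

node = StoreAndForwardNode()
node.send("sxsw", "hello?", peer_in_range=False)
print(len(node.buffer))           # 1
print(node.on_peer_discovered())  # [('sxsw', 'hello?')]
```

    The bounded `deque` captures the trade-off in a choppy Bluetooth mesh: messages survive dead zones, but only the most recent thousand, so a long-isolated node silently sheds its oldest traffic.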

    Reply
  23. Tomi Engdahl says:

    Wall Street Journal:
    Telefonica announces deal to sell British telecom O2 to Hong Kong’s Hutchison Whampoa for £9.25B cash, with an additional £1B if cash-flow targets are hit
    http://www.wsj.com/article_email/telefonica-agrees-to-sell-o2-for-13-84-billion-1427217528-lMyQjAxMTA1NTIzNDUyNTQyWj

    Reply
  24. Tomi Engdahl says:

    32-fiber singlemode lensed MPO connector
    http://www.cablinginstall.com/articles/2015/03/32-fiber-sm-mpo-connector.html

    Sumitomo Electric Industries has developed a 32-fiber singlemode expanded-beam, lensed MPO connector, and will introduce the connector at the OFC exhibition being held March 24-26. When announcing the new connector, Sumitomo said, “Physical-contact (PC) connectors require careful endface cleaning before mating because the beam diameter at the fiber endface is only 0.01 mm [and] accordingly sensitive to minute debris. As fiber counts increase, pressing force must be increased to maintain PC connection for all fibers.” As a result of these characteristics, the ability to easily mate connectors may be impaired, and the connector’s mechanical reliability may suffer.

    “The newly developed singlemode multi-fiber lensed connector has flat surface lenses embedded at the end surface of the singlemode fibers, and expands the beam from the singlemode fibers,”

    Reply
  25. Tomi Engdahl says:

    Optical virtual switch platform targets data centers
    http://www.cablinginstall.com/articles/2015/03/coadna-ovs-datacenter.html

    Built upon CoAdna’s LightFlow wavelength selective switches (WSSs), the Gen-2 OvS platform can be used to create a distributed optical backplane that can interconnect up to 12 spine or aggregation switches with redundant full mesh interconnection and reconfigurable bandwidth. It also can support flattened-butterfly network architectures with simplified cabling and improved network performance. Up to 200 racks with 20 to 40 servers on each rack can be interconnected in such a network, CoAdna asserts. The Gen-2 OvS has a built-in OpenFlow 1.3 interface to support operation via software-defined network (SDN) principles.

    Reply
  26. Tomi Engdahl says:

    Optical transceivers promote 100GbE networks
    http://www.edn.com/electronics-products/other/4438989/Optical-transceivers-promote-100GbE-networks?_mc=NL_EDN_EDT_EDN_today_20150324&cid=NL_EDN_EDT_EDN_today_20150324&elq=8cb72af89e1f4f41ba1a4f5f7ed00bc4&elqCampaignId=22218&elqaid=24955&elqat=1&elqTrackId=1a2e739ddaa24935adf7038fa592b7c9

    Avago Technologies will showcase two fiber-optic transceiver modules at this month’s OFC 2015 conference that the company believes will enable broad adoption of 100GbE fiber links for data centers and accelerate the transition of data networks to 100G speeds to meet the increasing demand for bandwidth. The production-worthy AFBR-89CDDZ and AFCT-8450Z four-channel pluggable transceivers come in QSFP28 and CFP4 form factors, respectively.

    Compliant with IEEE 802.3 100GBASE-SR4 standards, the AFBR-89CDDZ QSFP28 transceiver is intended for 100GbE short-range link distances of up to 100 m using multimode fiber media.

    Reply
  27. Tomi Engdahl says:

    EU digital veep: If you like America’s radical idea of net neutrality, you’re in luck, Europe
    And if you don’t, just download some music and be happy
    http://www.theregister.co.uk/2015/03/25/andrus_ansip_net_neutrality/

    Europe’s digi-chief has spoken out about net neutrality rules emerging from America, and mobile networks favoring particular websites over others.

    Speaking at an event in Brussels this week, Digital Vice President of the Commission Andrus Ansip said he sees no difference between the EU and the FCC in the US in their approach to net neutrality. And the US is quite radical in this area.

    More surprisingly, Ansip said he “cannot see any big differences” between the European Parliament, the Commission and the council of national ministers on net neutrality.

    The three bodies have just begun negotiations on a raft of new telecommunications laws, but earlier this month the council redefined “specialised services” in a way that angered some MEPs.

    But Ansip seemed much more amenable to the idea. “It is difficult to define specialised services, because we don’t know what services we will have in five years or in 10 years, but we have to try. Although we are trying to define specialised services, but it is clear that within those rules the internet has to stay open – no blocking, no throttling.”

    The digi-veep also said he could live with some so-called zero-rated services – the practice whereby mobile operators do not charge for specified volumes of data from specific apps or used through specific services. For example, a mobile network could say music streamed from Spotify doesn’t count against a subscriber’s monthly download limit – a special offer that’s at odds with the principles of net neutrality.

    “This is not a black and white issue.”

    Reply
  28. Tomi Engdahl says:

    BT back to mobes with cheap, SIM-only swoop on broadband customers
    ‘Inside out’ 4G service stuck at ‘roadmap’ stage
    http://www.theregister.co.uk/2015/03/25/bt_mobile_sim_only_4g_data_offer/

    BT has tentatively returned to the mobile market it abandoned 13 years ago with a competitive SIM-only deal for its broadband customers.

    The one-time state monopoly – as expected – touted a 4G data, minutes and texts bundle this morning, with prices starting at £5 a month.

    BT’s offer weds existing broadband subscribers to a 12-month “bring your own phone” contract, with the sweetener of a 50 per cent discount on 4G tariffs, when compared with non-BT customers.

    Reply
  29. Tomi Engdahl says:

    Ofcom: ‘White space’ tech to support mobile data services in 2015
    Will help UK meet demand for data in ‘internet of things’ era
    http://www.theregister.co.uk/2014/10/13/ofcom_white_space_tech_to_support_digital_terrestrial_tv_in_2015/

    “White space” wireless technology could be used for mobile data services as early as next year, Ofcom has said.

    The regulator said that trials of white space technology – which refers to the gaps that exist between radio spectrum bands already in use – will determine whether data can be transmitted using the gaps between radio frequency bands without disrupting the services already being delivered over existing spectrum.

    The white space that could be brought into use next year currently acts as a buffer between the radio spectrum bands used to support digital terrestrial TV broadcasting.

    Ofcom said it hopes white space technology will be “rolled out during 2015” following the conclusion of the current trials, further testing and the development of a specific policy on its use. Bringing white space technology into use would enable “the use of new wireless applications to benefit consumers and businesses across the country”, it said.

    Being able to exploit the gaps between radio spectrum would help the UK meet “the growing demand for data” in the “internet of things” era, the regulator said.

    Reply
  30. Tomi Engdahl says:

    OFC: Networks Cost Too Much
    http://www.eetimes.com/document.asp?doc_id=1326123&

    Los Angeles—”The cost of electrical-to-optical conversion is too high,” Pradeep Sindhu, Vice Chairman and Chief Technology Officer at Juniper Networks told OFC 2015 conference attendees at this year’s plenary session. “It must be crushed.”

    In his presentation, Sindhu explained that internet traffic is still growing at a rate of more than 50% per year. That puts pressure on service providers to keep up with demand but as they do, their profits shrink. Thus, data-communications networks will need significant changes.

    “Optics and electronics are the magic of networks,” he said, “but optical nodes and electrical links don’t make sense.” He sees that links which are electrical today will need to become optical, pushing electrical signals farther back to the point where they’re needed mostly for switching and routing. He predicts that optics will replace electronics for transport distances longer than 2 m because “Copper has run out of gas.”

    Really? People have been predicting copper’s demise for years and yet it lives on. Remember when we thought we’d never see 10 Gbps over copper? Now 25 Gbps is in deployment and 56 Gbps is inevitable. Let’s see where we are in another five years.

    Sindhu also noted that today’s networks are constructed by laying IP (internet protocol) protocols on top of transport networks such as Ethernet. “That’s better than the three-layer networks we used to have when we also had a SONET layer, but it’s still too much.” That is, we’ll need a single-layer network where network elements consist of fast IP routers connected through optical links. “IP routers will be the dominant network elements at all scales,” he noted. He doesn’t, however, see a rise in optical switching outside of special applications. He does see virtually all network connections being point-to-point connections carried over fiber.

    Even with added optics, light has a finite speed and that will force data centers to centralize where it makes sense to do so. Centralization of data centers, Sindhu explained, will provide an increase in network computing power for those at Google or Amazon, but it adds risk by creating a single point of failure. Therefore, centralization won’t make sense for IO intensive applications. He noted that the internet hasn’t failed because of its decentralized architecture.

    Reply
  31. Tomi Engdahl says:

    Charlie Warzel / BuzzFeed:
    Facebook is poised to become the gateway to the mobile internet by building an app ecosystem to rival that of Apple and Google — Facebook Is Eating The Internet — For years now there’ve been two competing versions of Facebook. It’s either an app on your phone or it’s your entire homescreen.

    Facebook Is Eating The Internet

    This version of Facebook is one where it is no longer just a single factor in our lives but the overarching context that consumes everything beneath it.
    http://www.buzzfeed.com/charliewarzel/facebook-is-eating-the-internet#.qxvgj7yx

    Reply
  32. Tomi Engdahl says:

    More than half of the households in Sweden could already get an optical fiber connection.
    According to regulator PTS, in October 54 per cent of households had a fiber connection available. The proportion could easily be increased to 75 per cent, as only one quarter of households and businesses are located further away from existing fiber.

    61 per cent of households could get a 100 Mbit/s internet connection at the moment. In October 2014 more than 38 per cent of households also paid for such a fast connection.

    In the countryside, fiber has been pulled to 13 per cent of households – but the number of fiber connections is increasing quickly (by 44 per cent last year).

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2601:jo-puolet-ruotsalaisista-saisi-kuituyhteyden&catid=13&Itemid=101

    Reply
  33. Tomi Engdahl says:

    LTE-A Release 12 transmitter architecture: analog integration
    http://www.edn.com/design/wireless-networking/4438980/LTE-A-Release-12-transmitter-architecture–analog-integration?_mc=NL_EDN_EDT_EDN_analog_20150326&cid=NL_EDN_EDT_EDN_analog_20150326&elq=ee2178f3a4b74161b414f6f951b29e36&elqCampaignId=22255&elqaid=25002&elqat=1&elqTrackId=04196c0b43584ee4933943fe3a2703b0

    This two-part article series reviews new developments in the Fourth Generation Long Term Evolution (4G-LTE) cellular standard. The series explores LTE-Advanced (LTE-A) Release-12 (Rel-12) features and the impact on eNodeB radio frequency (RF) transmitters. The articles reveal how analog integration can overcome design challenges arising from the latest 4G developments.

    Reply
  34. Tomi Engdahl says:

    Does your broadband feel faster? Akamai says it went up 20 per cent*
    * On average, globally
    http://www.theregister.co.uk/2015/03/26/does_your_internet_connection_feel_faster_akamai_says_it_went_up_20_per_cent/

    Broadband speeds around the world increased by 20 per cent, year on year, according to internet tentacle monster Akamai.

    The website-caching giant claims in its latest State of the Internet report that, during the past three months to now, connections to ISPs peaked at 26.9Mbps, on average, dropping to 4.5Mbps the rest of the time – the latter figure up by a fifth compared to the same quarter a year ago.

    Over the past 12 months, 132 countries saw their average connection speeds increase. All but four countries (Sudan, Botswana, Yemen and Libya) boasted average speeds above 1Mbps.

    Once again, South Korea boasts the fastest connections in the world, with an average speed of 22.2Mbps. Hong Kong was second (16.8Mbps), followed by Japan (15.2Mbps), Sweden (14.6Mbps) and Switzerland (14.5Mbps). All according to Akamai’s figures.

    The US ranks just 16th in the world, averaging 11.1Mbps and 49.4Mbps peak.

    Reply
  35. Tomi Engdahl says:

    Big barrier to 5G cracked by full-duplex chippery
    It’s hard to send and receive radio in a small space, but boffins reckon they’ve cracked it
    http://www.theregister.co.uk/2015/03/26/full_duplex_transciever_for_mobiles_of_the_future/

    Their approach is to embed the interference-cancellation circuits in a wideband noise-cancelling, distortion-cancelling amplifier, in what they call a “noise and leakage cancelling receiver” (NLC-RX) that works between 0.3 to 1.7GHz on CMOS.

    As the university explains, standards like 4G/LTE already have a big spectrum footprint (with support for 40 frequency bands, Columbia notes). Dealing with 5G means two things: greater expectations, and consequently even more pressure on spectrum.

    Interference Mitigation in Reconfigurable RF Transceivers
    http://www.ee.columbia.edu/~harish/interference-mitigation-in-reconfigurable-rf-transceivers.html

    Reply
  36. Tomi Engdahl says:

    Deepa Seetharaman / Wall Street Journal:
    Facebook’s solar-powered Internet drone, dubbed Aquila, with a wingspan of a Boeing 737 to begin testing this summer

    Facebook, Moving Ahead with Drone, Plans Test This Summer
    http://blogs.wsj.com/digits/2015/03/26/facebook-moving-ahead-with-drone-plans-test-this-summer/

    Facebook plans to test a version of its solar-powered drone this summer, a step in its efforts to beam Internet access to billions of people without it today, executives said on Thursday.

    Earlier this month, Facebook tested a smaller drone, about one-tenth the size of its planned solar-powered models. The full-size version will have the wingspan of a Boeing 737 but only weigh as much as a small car.

    The drone — dubbed Aquila — is one aspect of Facebook’s Internet.org plan to extend Web access to what it estimates are 1.1 billion to 2.8 billion people without it today.

    Facebook and rival Google are experimenting with multiple technologies to reach people unlikely to be served by traditional landlines or cellular networks. In addition to drones, Facebook is evaluating satellite and other technologies. Google has its own drone program, is working on high-altitude balloons and has had a program to deliver Internet access from orbiting satellites.

    Facebook executives said the company is unlikely to get drones aloft and beaming Internet access any time soon. They cited the need to vet the drone’s safety and communication features as well as form partnerships with carriers.

    “We are working towards a real test flight this summer sometime,”

    Reply
  37. Tomi Engdahl says:

    The mysterious disappearance of duo-binary signaling
    http://www.edn.com/design/test-and-measurement/4438975/The-Mysterious-Disappearance-of-Duo-binary-signaling?_mc=NL_EDN_EDT_EDN_weekly_20150326&cid=NL_EDN_EDT_EDN_weekly_20150326&elq=f09d348dca0244d68527d4f82be2f014&elqCampaignId=22263&elqaid=25010&elqat=1&elqTrackId=ae92eb282c514765b4baf3af471aa7ed

    FCI/Ghent University/Alcatel-Lucent talk on duo-binary transmission

    At the jitter panel, we heard presumed experts say, “duo-binary isn’t enough,” without really giving us any quantitative justification for their comments or referring us to any pieces of work. At the duo-binary talk, however, we saw a very convincing presentation on the relative merits of duo-binary transmission for high-speed serial communication over typical copper channels. In fact, at the FCI booth, there was a working demonstration of reliable duo-binary 56 Gbps transmission over a real backplane/daughter card configuration. So, which is it? Is duo-binary a serious contender for the 28 Gbps and 56 Gbps serial communication market, or just a neat academic novelty with a few devoted followers, but no real market viability?

    What is duo-binary?
    Duo-binary signaling was, actually, invented in the 1960s and is just one example of a very old idea in data communication: partial response signaling. When using partial response signaling, we accept (or even introduce) some ISI (inter-symbol interference) in the signal at the Rx (receiver) input, which relaxes the channel’s bandwidth requirement. As long as we understand the precise nature of the ISI, we can remove it in the Rx, provided the circuitry needed to do so isn’t too complex, expensive, or slow.

    When designing for real-world channels, of course, we can never achieve the ideal response noted in the DesignCon 2015 paper.

    Despite the apparent advantages of duo-binary signaling, it appears to have fallen out of favor with the experts, who, over the last few years, seem to prefer other schemes for 28 Gbps and 56 Gbps transmission over copper media.

    Reply
  38. Tomi Engdahl says:

    OFC: Transceiver Module Spec Prevents Mismatching
    http://www.eetimes.com/document.asp?doc_id=1326143&

    This week at OFC 2015, the CDFP Multi-Source Agreement (MSA) announced the release of Rev. 3.0 of the 400 Gbps (16 x 25 Gbps) pluggable electrical-to-optical transceiver specification.

    The release includes mechanical specifications for a new Style 3 module that’s intended for active optical cables based on proposed draft specifications from the IEEE P802.3bs 400 Gbps Ethernet task force. Style 3 modules will use the same connectors as Style 1 and Style 2 modules, but Style 3 modules address an issue of mating with Style 1 and Style 2 receptacles. Style 3 modules will be mechanically keyed to prevent incorrect mating.

    CDFP electrical-to-optical interfaces support data rates of 25 Gbps over 16 lanes. That’s an aggregate data rate of 400 Gbps through a single module.

    Reply
  39. Tomi Engdahl says:

    Ericsson on Power: We Need To Rethink Structure
    Software defined power architecture on the horizon
    http://www.eetimes.com/document.asp?doc_id=1326139&

    Next-generation cellular communications will require a drastic reduction in energy consumption, something researchers at Ericsson are calling “ultra-lean transmission.”

    “Ultra lean transmission is not only the coolest thing we do for energy performance in 5G, but one of the coolest things we’ll do for energy performance at all,” Ylva Jading, a senior researcher at Ericsson, told EE Times. “We need to separate how we think around control and data plane, and really make sure the control part becomes more scalable. This will become absolutely essential to get the lean framework for 5G,” she said.

    Macro base stations for 5G cellular, for example, will need to reduce power by a factor of 10. To do so while supporting increased use, engineers must focus on improving signal transmission. Rather than continuously transmitting signals once per millisecond in an “always on” format, a base station should transmit once per 100 milliseconds.

    This could be achieved by increasing sleep modes or using advanced beamforming techniques to reduce interference and focus energy on specific users, Jading said. Additionally, a focus on power management software can reduce much of the fixed cost associated with power modules.
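    The duty-cycle arithmetic behind ultra-lean transmission can be sketched as follows (the wattages are invented placeholders, not Ericsson figures; only the 1 ms vs 100 ms signalling periods come from the article):

```python
def avg_power_w(active_w, sleep_w, signal_period_ms, burst_ms=1.0):
    """Average power of a base station that must wake for a ~1 ms burst of
    mandatory signalling every signal_period_ms, sleeping in between."""
    duty = burst_ms / signal_period_ms
    return duty * active_w + (1.0 - duty) * sleep_w

# "Always on": mandatory signals every 1 ms, so the radio never sleeps.
always_on = avg_power_w(active_w=100.0, sleep_w=9.0, signal_period_ms=1.0)    # 100 W
# Ultra-lean: signals every 100 ms, so the radio can sleep 99% of the time.
lean = avg_power_w(active_w=100.0, sleep_w=9.0, signal_period_ms=100.0)       # ~9.9 W
```

    With these placeholder numbers the average power drops by roughly the factor of 10 the article mentions; deeper sleep modes or beamforming would shift `sleep_w` and `active_w` further.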

    Reply
  40. Tomi Engdahl says:

    New homeowner selling house because he can’t get Comcast Internet
    “I accidentally bought a house without cable,” writes man who works at home.
    http://arstechnica.com/business/2015/03/new-homeowner-selling-house-because-he-cant-get-comcast-internet/

    One unlucky man who bought a house that can’t get wired Internet service is reportedly selling the home just months after moving in.

    “Before we even made an offer [on the house], I placed two separate phone calls; one to Comcast Business, and one to Xfinity,” Seth wrote. “Both sales agents told me that service was available at the address. The Comcast Business agent even told me that a previous resident had already had service. So I believed them.”

    That turned out to be untrue. After multiple visits from Comcast technicians, he says the company told him extending its network to his house would cost $60,000, of which he would have to pay an unspecified amount. But then Comcast allegedly pulled the offer

    Besides Comcast and CenturyLink, the Kitsap Public Utility District operates a gigabit fiber network that passes near Seth’s house, Consumerist wrote. “So why can’t he just get his service from the county? Because Washington is one of the half-dozen states that forbids municipal broadband providers from selling service directly to consumers,” the article said.

    The National Broadband Map lets you enter any address in the US to find out what Internet access options are available.

    “I’m devastated. This means we have to sell the house,”

    Reply
  41. Tomi Engdahl says:

    FCC Plans a Vote on New Airwaves Sharing Plan
    http://recode.net/2015/03/27/fcc-plans-a-vote-on-new-airwaves-sharing-plan/

    Federal regulators are set to vote next month on a plan to allow wireless carriers and companies including Google to share airwaves with the government, in an effort to make more airwaves available for future wireless devices.

    setting aside some frequencies for new Wi-Fi networks. It would open up airwaves now used mostly by military radar systems.

    It could be several years before consumers see any changes, but the move could make much more spectrum available for smartphones and future Internet of Things devices. While the airwaves aren’t really suitable for creating new long-range networks, they could be used to create smaller city-wide wireless broadband networks.

    Essentially, the government has developed an airwaves-sharing plan that would protect radar systems near military bases and the coastline while auctioning off access to the airwaves in other parts of the country. A portion of the airwaves would also be reserved for free use by anyone with an FCC-certified device that doesn’t create interference.

    The agency proposed the airwaves-sharing plan last spring, and the wireless industry and some tech companies have been arguing about the details ever since.

    Another issue involves which sorts of technologies can use the shared airwaves. Some wireless carriers are interested in using an “LTE-U”

    Reply
  42. Tomi Engdahl says:

    Nanolaser Enables On-Chip Photonics
    Using light to communicate instead of electricity
    http://www.eetimes.com/document.asp?doc_id=1326138&

    Sending communications signals around chips, and between chips and boards is an area of intense research worldwide. Now University of Washington (Seattle) and Stanford University (Calif.) have created an on-chip laser that can be electro-modulated for easy optical communications.

    Most materials from which on-chip lasers can be built are not compatible with silicon substrates, but these researchers have high hopes that their atomically thin (just 0.7 nanometers thick) laser can be integrated onto standard silicon chips.

    “Today we are using a tungsten photonics cavity sandwiched between layers of selenium, but we hope to achieve the same results with silicon nitride in the future,”

    Reply
  43. Tomi Engdahl says:

    Eurovision tellybods: Yes, you heard right – net neutrality
    Broadcasters suddenly start caring about the internet
    http://www.theregister.co.uk/2015/01/21/eurovision_organisers_issue_pro_net_neutrality_letter/

    The European Broadcasting Union, which organises the annual Eurovision musical telly glitterfest, has rounded up a group of “civil society” pressure groups to send a letter to EU leaders demanding net neutrality.

    The letter, published on Tuesday, urges the EU Council of Ministers “to support strong and clear net neutrality rules” in the so-called Telecoms Package – a proposed EU law that aims to do everything from abolishing roaming to allocating radio spectrum.

    Co-signatories of its letter include European digital rights group EDRi, the Centre for Democracy and Technology (CDT), the Chaos Computer Club (CCC), the Computer and Communications Industry Association (CCIA) and La Quadrature du Net.

    Reply
  44. Tomi Engdahl says:

    Fiber inspection tool connects to Android devices
    http://www.edn.com/electronics-products/electronic-product-reviews/other/4439039/Fiber-inspection-tool-connects-to-Android-devices?_mc=NL_EDN_EDT_EDN_productsandtools_20150330&cid=NL_EDN_EDT_EDN_productsandtools_20150330&elq=77f4ce07fa0346b28d4bdab64eae272b&elqCampaignId=22298&elqaid=25060&elqat=1&elqTrackId=c1f095bb14844458a3af6fa8a414801b

    Fiber-optic cables consisting of fibers and connectors need proper alignment to maximize the amount of light that passes through them. At OFC 2015, EXFO introduced a handheld tool for visually inspecting fiber-optic connections using your Android tablet or smartphone. The FIP-435B handheld fiber-inspection probe provides you with a view of the connection and compares it to industry standards.

    The fiber-inspection probe can automatically detect the connection, locate and center the fiber image, adjust and optimize the focus and capture, run pass/fail analysis based on standards, then save and report the results. Note the light on the probe, which indicates if a connection meets the selected industry standard. Thus, you can tell if the connection passes without the need for a remote screen.

    Using the ConnectorMax2 Android app, you can connect the FIP-435B to an Android device over Wi-Fi.

    FIP-435B can store images and download them to the Android device for archiving. Plus, the Android app can connect to the internet using a 3G, 4G, or Wi-Fi connection and connect to a remote database for storing test data.

    Reply
  45. Tomi Engdahl says:

    TDR with dual-aspect display speeds cable fault finding
    http://www.edn.com/electronics-products/other/4439027/TDR-with-dual-aspect-display-speeds-cable-fault-finding?_mc=NL_EDN_EDT_EDN_productsandtools_20150330&cid=NL_EDN_EDT_EDN_productsandtools_20150330&elq=77f4ce07fa0346b28d4bdab64eae272b&elqCampaignId=22298&elqaid=25060&elqat=1&elqTrackId=1fe0b22ec3934535b21821b68762d301

    Offering a choice of five output impedances, the TDR2010 dual-channel time-domain reflectometer (TDR) from Megger locates faults on all metallic cables, including twisted-pair wires and coaxial cables.

    An auto-setup function determines the impedance of the cable under test, sets the unit accordingly, and selects the optimum gain and pulse width.

    The TDR2010 furnishes 25-Ω, 50-Ω, 75-Ω, 100-Ω, and 125-Ω impedances. Velocity factor can be set between 0.20 and 0.99 to meet any cable test requirement. The instrument has a minimum resolution of 0.3 ft (0.1 m) and a maximum range of 60,000 ft (20 km), depending on velocity factor and cable type.
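    As an aside (illustrative arithmetic, not from Megger's documentation): the distance a TDR reports follows directly from the velocity factor — the echo delay is scaled by the propagation speed and halved because the pulse travels out and back:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def fault_distance_m(round_trip_seconds, velocity_factor):
    """Distance to a reflection (fault) on the cable.

    velocity_factor is the fraction of c at which the pulse propagates
    (settable between 0.20 and 0.99 on this instrument); divide by 2
    because the measured delay covers the round trip.
    """
    return velocity_factor * C * round_trip_seconds / 2
```

    For example, on a coax with a velocity factor of 0.66, an echo arriving 1 µs after the pulse puts the fault roughly 99 m away — which is also why an inaccurate velocity-factor setting translates directly into distance error.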

    Reply
  46. Tomi Engdahl says:

    Researchers Claim 44x Power Cuts
    New on/off transceivers reduce power 80%
    http://www.eetimes.com/document.asp?doc_id=1326161&

    Researchers sponsored by the Semiconductor Research Corp. (SRC, Research Triangle Park, N.C.) claim they have extended Moore’s Law by finding a way to cut serial link power by as much as 80 percent. The innovation at the University of Illinois (Urbana) is a new on/off transceiver to be used on chips, between chips, between boards and between servers at data centers.

    “While this technique isn’t designed to push processors to go faster, it does, in the context of a datacenter, allow for power saved in the link budget to be used elsewhere,”

    Today on-chip serial links consume about 20 percent of a microprocessor’s power and about seven percent of the total power budget of a data center. By using transceivers that only consume power when being used, a vast amount can be saved from their standby consumption.

    The reason the links are always on today is to maximize speed. The new architecture reduces their power-up time enough to make it worth turning them off when not in use. The team estimates that data centers alone would save $870 million per year by switching to their transceiver architecture.

    Other groups have tried to build on/off transceivers, but according to Hanumolu, their power-on time was too slow–in the 100s of nanoseconds range–and of course there is Energy Efficient Ethernet (IEEE 802.3az), but it requires microseconds to power on, whereas the University of Illinois design takes just 22 nanoseconds.

    Of course, the power savings is dependent on the application, and circuits that are always-on, like clocks, would not be appropriate. However, there are so many seldom-used but necessary serial links on- and between-chips and systems that on average the new transceiver consumes 10 times less power than the conventional kind, according to Hanumolu.

    The researchers estimate that serial links are idle more than 50-to-70 percent of the time on average, making it a significant waste of power to leave them on all the time.
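    A back-of-the-envelope model of where those savings come from (the idle fraction and power figures below are illustrative, and the 22 ns turn-on energy is neglected as negligible against typical idle periods):

```python
def onoff_avg_power_w(active_w, standby_w, idle_fraction):
    """Average link power when the transceiver powers down while idle.

    An always-on link burns active_w continuously; an on/off link pays
    active_w only while traffic flows and standby_w while idle. The
    turn-on energy of each ~22 ns wake-up is ignored in this sketch.
    """
    return (1.0 - idle_fraction) * active_w + idle_fraction * standby_w

always_on = 1.0                                    # normalise the link to 1 W
onoff = onoff_avg_power_w(1.0, 0.05, idle_fraction=0.7)   # ~0.335 W average
savings = 1.0 - onoff / always_on                  # ~66% at 70% idle
```

    At 70% idle the sketch saves about two thirds of the link power, approaching the claimed 80% as the idle fraction grows and standby power shrinks.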

    Reply
  47. Tomi Engdahl says:

    T-Mobile ditches static coverage map
    http://www.cnet.com/news/t-mobile-rolls-out-ever-changing-map-so-you-can-check-your-coverage/

    The carrier says its new map of network coverage uses real-time data and will be updated every two weeks.

    T-Mobile is taking a new approach to the staid wireless-coverage map.

    The company unveiled on Monday its “next-gen” network map, which uses real-time data collected from customers and through third-party sources such as Speedtest.net and Inrix. What makes it next gen? It isn’t a static map, but one that will get updated every two weeks with new data. Customers will be able to drill down to an area of 100 square meters

    The network map gives consumers a chance to find out whether T-Mobile’s coverage is sufficient in their area before taking a chance on the carrier. It’s one of the ways T-Mobile is hoping to fight the perception that its network lags behind rivals such as AT&T and Verizon Wireless when it comes to breadth of coverage;

    Announcing the Next-Gen Network Coverage Map. Only at T-Mobile.
    http://newsroom.t-mobile.com/issues-insights-blog/network/next-gen-network-map.htm

    T-Mobile’s new Next-Gen Network Map reflects near real-time customer experiences on our network—based on more than 200 million actual customer usage data points every day. On top of that − to validate and augment our own collected data − our new map also incorporates additional customer usage data from trusted third-party sources, including Inrix and others.

    As we continue to rapidly enhance and expand our 4G LTE coverage to reach 300 million Americans this year, our new Next-Gen coverage map will also continue to evolve

    Reply
  48. Tomi Engdahl says:

    Ethernet Alliance plots 1.6 terabit-per-second future
    In 2025 you’ll run 100 Gbps on the server, 400 Gbps on the switch and 1 Tbps on the router
    http://www.theregister.co.uk/2015/04/01/ethernet_alliance_plots_multiterabit_future/

    Think 100 Gbps Ethernet is The Coming Thing? You ain’t seen nothing yet: one of the venerable standard’s custodians wants it going a hundred times faster by the end of another decade.

    No, this isn’t El Reg April Foolery: that’s what the Ethernet Alliance’s 2015 roadmap expects to be able to deliver.

    Between now and 2020, the group expects to have ratified the 25 Gbps speed proposed last year by the 25G Ethernet Consortium, along with 50 Gbps, 200 Gbps and 400 Gbps documents.

    As The Platform notes, a reasonable expectation by 2025 will be for most deployments to be running “100 Gbps on the server, 400 Gbps on the switch, and 1 Tbps on the router”.

    The roadmap shows that the Alliance wants to have multi-mode fibre ports carrying 25, 50, 200 and 400 Gbps lanes (distances up to 100 metres), while single-mode fibres will run distances out to as much as 10 km (for the proposed 400 GBASE-LRn standard).

    The 25 and 40 Gbps twisted pair connections will require Cat8 cable, the roadmap says.

    Roadmap
    http://www.ethernetalliance.org/roadmap/

    The 2015 Ethernet Roadmap shows a roadmap for physical links through 2020 and looks into the future terabit speeds as well. The roadmap is broken into three parts

    Reply
  49. Tomi Engdahl says:

    Broadcom offers devs a peep-show inside its switches
    API not a view all the way to silicon, but a start
    http://www.theregister.co.uk/2015/03/11/broadcom_offers_devs_a_peepshow_inside_its_switches/

    Open Compute Summit When Facebook started down the path of creating the Open Compute Project (OCP), part of its reasoning was that too many vendors hide the underlying silicon from the world of users.

    That part of the project seems to be having its desired effect, with Broadcom using the OCP Summit to announce that it’s going to give developers API access to its silicon under Facebook’s FBOSS operating system and Microsoft’s Switch Abstraction Interface (the latter also shown off at the summit for the first time).

    Broadcom’s OpenNSL APIs, published at GitHub, represent a key step forward in the OCP market – the once fiercely protected merchant silicon becoming visible to the outside world through an API.

    Broadcom explains that OpenNSL – the Open Network Switch Library – maps “Broadcom’s software development kit (SDK) to an open northbound interface, enabling the integration of new applications and the ability to optimise switch hardware platforms”.

    Currently the APIs only support the StrataXGS Tomahawk and Trident II switches.

    Reply
