Telecom trends for 2015

In a few years there will be close to 4 billion smartphones on earth. Ericsson’s annual mobility report forecasts rising mobile subscriptions and connections through 2020: 9.5 billion smartphone subscriptions by 2020 and an eight-fold increase in traffic. The same report expects that by 2020, 90% of the world’s population over six years old will have a phone. In short, it describes a connected world where everyone will have a connection one way or another.

What about the network technologies in use? Today the majority of the world operates on GSM and HSPA (3G). Some countries are starting to have good 4G (LTE) coverage, but on average only about 20% of the world’s population is covered by LTE. 4G/LTE small cells will grow at twice the rate of 3G small cells and surpass both 2G and 3G in 2016.

Ericsson expects that 85% of mobile subscriptions in the Asia Pacific, the Middle East, and Africa will be 3G or 4G by 2020. Some 75-80% of subscribers in North America and Western Europe are expected to be using LTE by 2020. China is by far the biggest smartphone market in the world by current users, and it is rapidly moving to high-speed 4G technology.

Sales of mobile broadband routers and mobile broadband USB sticks are expected to continue to drop. In 2013, 87 million of those devices were sold; in 2014, sales dropped another 24 per cent. China’s Huawei is the market leader (45%), so it has the most to lose from this.

The small cell backhaul market is expected to grow, and ABI Research believes 2015 will finally see meaningful small cell deployments. Millimeter wave technology, thanks to its large bandwidth and NLOS (non-line-of-sight) capability, is the fastest growing technology. 4G/LTE small cell solutions will again drive most of the microwave, millimeter wave, and sub-6GHz backhaul growth in metropolitan, urban, and suburban areas. Sub-6GHz technology will capture the largest share of small cell backhaul “last mile” links.

Technology for full-duplex operation on a single radio frequency has been designed. The new practical circuit, known as a circulator, lets a radio send and receive data simultaneously over the same frequency, which could supercharge wireless data transfer. The new circuit design avoids magnets and uses only conventional circuit components. A radio wave circulator used in wireless communications could double the available bandwidth by enabling full-duplex operation, i.e., devices sending and receiving signals in the same frequency band simultaneously. Let’s wait and see whether this technology turns out to be practical.

Broadband connections are finally more popular than traditional wired telephone lines: in the EU, by the end of 2014, fixed broadband subscriptions will outnumber traditional circuit-switched fixed lines for the first time.

After six years in the dark, Europe’s telecoms providers see a light at the end of the tunnel. According to a new report commissioned by industry body ETNO, the sector should return to growth in 2016. The projected growth for 2016, however, is small – just 1 per cent.

With headwinds and tailwinds, how high will the cabling market fly? Cabling for enterprise local area networks (LANs) experienced growth of between 1 and 2 percent in 2013, while cabling for data centers grew 3.5 percent, according to BSRIA, for a total global growth of 2 percent. The structured cabling market is facing a turbulent time. Structured cabling in data centers continues to move toward the use of fiber. The number of smaller data centers that will use copper will decline.

Businesses will increasingly shift from buying IT products to purchasing infrastructure-as-a-service and software-as-a-service. Both trends will increase the need for processing and storage capacity in data centers, and we will also need fast connections to those data centers. This will cause significant growth in WiFi traffic, which will mean more structured cabling used to wire access points. Convergence will also result in more cabling for Internet Protocol (IP) cameras, building management systems, access controls and other applications. This could mean a decline in the installation of separate special-purpose cabling for those applications.

The future of your data center network is a moving target, but one thing is certain: it will be faster. The four key developments in this field are 40GBase-T, Category 8 cabling, 32G and 128G Fibre Channel, and 400GbE.

Ethernet will increasingly move away from the 10/100/1000 speed progression as proposals for new speeds push in. The move beyond Gigabit Ethernet is gathering pace, with a cluster of vendors gathering around the IEEE standards effort to bring 2.5 Gbps and 5 Gbps speeds to the ubiquitous Cat 5e cable. With the IEEE standardisation process under way, the MGBase-T alliance represents an industry effort to accelerate adoption of 2.5 Gbps and 5 Gbps speeds for connections to fast WLAN access points. Intense attention is also being paid to the development of 25 Gigabit Ethernet (25GbE) and next-generation Ethernet access networks, and work on 40GBase-T is under way.

Cat 5e vs. Cat 6 vs. Cat 6A – which should you choose? Stop installing Cat 5e cable. “I recommend that you install Cat 6 at a minimum today.” The cable will last much longer and support higher speeds that Cat 5e simply cannot. Category 8 cabling is coming to data centers to support 40GBase-T.

A Power over Ethernet plugfest is planned for 2015 to test Power over Ethernet products. The plugfest will focus on the IEEE 802.3af and 802.3at standards relevant to IP cameras, wireless access points, automation, and other applications. It will test participants’ devices against the respective IEEE 802.3 PoE specifications, which distinguishes IEEE 802.3-based devices from non-standards-based PoE solutions.
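As a concrete reference for the two standards being tested, the sketch below prints their nominal power limits (PSE = power sourcing equipment, PD = powered device); the gap between the two figures is the budget reserved for loss in the cable. These are the standards’ nominal numbers, not plugfest results.

```python
# Nominal power limits of the two IEEE PoE standards named above.
POE_LIMITS_W = {
    "IEEE 802.3af (Type 1)": {"pse_max": 15.4, "pd_max": 12.95},
    "IEEE 802.3at (Type 2)": {"pse_max": 30.0, "pd_max": 25.5},
}

for std, lim in POE_LIMITS_W.items():
    cable_budget = lim["pse_max"] - lim["pd_max"]  # dissipated in the cabling
    print(f"{std}: source {lim['pse_max']} W, device {lim['pd_max']} W, "
          f"cable budget {cable_budget:.2f} W")
```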

Gartner expects that wired Ethernet will start to lose its position in the office in 2015, or within a few years after that, because of the shift to using the Internet mainly on smartphones and tablets. The change is significant, because it will break Ethernet’s long reign in the office. Consumer devices have already moved to wireless, and now it is the office’s turn. Many factors speak in favor of the mobile office. Research predicts that by 2018, 40 per cent of enterprises and organizations of various sizes will define WLAN as their default network; current workstations, desktop phones, projectors and the like would therefore be moved to wireless. Expect the wireless LAN equipment market to accelerate in 2015 as spending by service providers and education comes back, 802.11ac reaches critical mass, and Wave 2 products enter the market.

Scalable and secure device management for telecom, network, SDN/NFV and IoT devices will become a standard feature. Whether you are building a high-end router or deploying an IoT sensor network, a device management framework with support for new standards such as NETCONF/YANG and Web technologies such as Representational State Transfer (REST) is fast becoming a standard requirement. Next-generation device management frameworks can provide substantial advantages over legacy SNMP and proprietary frameworks.
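As a rough illustration of the REST side of such a framework, the sketch below polls interface state over a RESTCONF-style HTTP interface. The device address, credentials, and data path are hypothetical placeholders; real devices differ per vendor, and the NETCONF/YANG equivalent would use an XML-based session instead.

```python
# A minimal sketch of REST-style device management; the URL, credentials,
# and YANG data path are hypothetical placeholders.
import requests

DEVICE = "https://192.0.2.1"  # documentation-range address, not a real device

resp = requests.get(
    f"{DEVICE}/restconf/data/ietf-interfaces:interfaces",
    auth=("admin", "admin"),                        # placeholder credentials
    headers={"Accept": "application/yang-data+json"},
    verify=False,                                   # lab sketch only
)
resp.raise_for_status()
for iface in resp.json()["ietf-interfaces:interfaces"]["interface"]:
    print(iface["name"], iface.get("enabled"))
```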


U.S. regulators resumed consideration of mergers proposed by Comcast Corp. and AT&T Inc., suggesting a decision as early as March: Comcast’s $45.2 billion proposed purchase of Time Warner Cable Inc and AT&T’s proposed $48.5 billion acquisition of DirecTV.

There will be changes in the management of the global DNS. The U.S. is in the midst of handing over its oversight of ICANN to an international consortium in 2015. The National Telecommunications and Information Administration, which oversees ICANN, has assured people that the handover will not disrupt the Internet as the public has come to know it. Discussion is going on about what can replace the US government’s current role as IANA contract holder. IANA is the technical body that runs things like the global domain-name system and allocates blocks of IP addresses. Whoever controls it controls the behind-the-scenes of the internet; today, that’s ICANN, under contract with the US government, but that agreement runs out in September 2015.


1,044 Comments

  1. Tomi Engdahl says:

    Google Reveals Data Center Net
    Openflow rising at search giant and beyond
    http://www.eetimes.com/document.asp?doc_id=1326901&

    Software-defined networks based on the Openflow standard are beginning to gain traction with new chip, system and software products on display at the Open Networking Summit here. At the event, Google revealed its data center networks are already using Openflow and AT&T said (in its own way) it will follow suit.

    Showing support for the emerging approach, systems giants from Brocade to ZTE participated in SDN demos for carriers, data centers and enterprises running on the show floor.

    SDN aims to cut through a rat’s nest of existing protocols implemented in existing proprietary ASICs and applications programming interfaces. If successful, it will let users more easily configure and manage network tasks using high-level programs run on x86 servers.

    The rapid rise of mobile and cloud traffic is driving the need for SDN. For example, Google has seen traffic in its data centers rise 50-fold in the last six years, said Amin Vahdat, technical lead for networking at Google.

    In a keynote here, Vahdat described Jupiter, Google’s data center network built internally to deal with the data flood. It uses 16x40G switch chips to create a 1.3 Petabit/second data center Clos network, and is the latest of five generations of SDN networks at the search giant.

    “We are opening this up so engineers can take advantage of our work,” Vahdat said, declining to name any specific companies adopting its Jupiter architecture.

    For its part, AT&T said it will follow the lead of Google and Facebook, which showed its networking systems in March. Like the Web giants, AT&T is turning to ODMs to provide switches and servers it specifies for networks managed largely using open source software.

    “This is the first time telcos have done this,” said John Donovan, senior executive vice president of technology and network operations at AT&T, promising trials and deployments of the ODM systems next year.

    AT&T aims to turn purpose-built systems such as session border controllers, into software applications in the next step of virtualizing its carrier network. “We will rely on software to virtualize and control 75% of our network…with the most important 5% to establish a foundation complete by the end of year, and a lot of it based on open software,” Donovan added.

    Instead of searching for and responding to known header fields in data packets, the Openflow approach supports commands expressed as tables. Support in the latest Openflow version 1.3 for multiple tables was a key to breaking free from the limits of TCAM memories in existing systems, said Carolyn Raab, vice president of product management for Corsa (Ottawa), a startup shipping SDN switches based on FPGAs.

    Today’s merchant network switches such as those from Broadcom still rely on TCAMs, Raab said, driving the company to use FPGAs. However, next-generation chips such as Cavium’s XPliant, shown running in three ODM systems at the show, are more amenable to handling the table-based approach of Openflow.
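    To make “commands expressed as tables” concrete, below is a minimal sketch of an OpenFlow 1.3 multi-table pipeline programmed from the open-source Ryu controller framework: table 0 steers matching traffic to table 1, which forwards it. The match fields and output port are hypothetical placeholders, not anything Google or AT&T runs.

    ```python
    # A minimal Ryu app sketching OpenFlow 1.3 multi-table rules;
    # the match fields and port number are hypothetical.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class MultiTableSketch(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_features(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='10.0.0.1')
            # Table 0: send matching packets on to table 1.
            dp.send_msg(parser.OFPFlowMod(
                datapath=dp, table_id=0, priority=10, match=match,
                instructions=[parser.OFPInstructionGotoTable(1)]))
            # Table 1: forward matching packets out port 1.
            dp.send_msg(parser.OFPFlowMod(
                datapath=dp, table_id=1, priority=10, match=match,
                instructions=[parser.OFPInstructionActions(
                    ofp.OFPIT_APPLY_ACTIONS,
                    [parser.OFPActionOutput(1)])]))
    ```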

  2. Tomi Engdahl says:

    Rebecca Greenfield / Bloomberg Business:
    Google’s Sidewalk Labs leads investors in acquisition of two firms behind LinkNYC initiative to bring Wi-Fi hubs to cities around the world

    Google Startup Aims to Bring Fast, Free Wi-Fi to Cities
    It’s buying two of the companies that are blanketing New York City with superfast Internet
    http://www.bloomberg.com/news/articles/2015-06-23/google-startup-aims-to-bring-fast-free-wi-fi-to-cities

    Sidewalk Labs, the Google-backed urban innovation startup unveiled this month, has picked its first project—bringing free, fast Wi-Fi to cities around the world.

    Today it announced it is leading a group of investors acquiring Control Group and Titan, among the companies working on the LinkNYC network to blanket New York City with free, superfast Wi-Fi. The two have merged to form Intersection, which will oversee the rollout of LinkNYC this fall and then seek to install similar technology in other cities.

  3. Tomi Engdahl says:

    Ingrid Lunden / TechCrunch:
    Verizon Completes Its Acquisition of AOL For $4.4B
    http://techcrunch.com/2015/06/23/verizon-completes-its-acquisition-of-aol-for-4-4b/

    Well, that was fast: Verizon has just announced that it has completed its acquisition of AOL, owner of TechCrunch, purchasing all outstanding shares for $50 per share in cash for a total price of $4.4 billion. The sale was originally announced just over a month ago.

  4. Tomi Engdahl says:

    Major internet providers slowing traffic speeds for thousands across US
    http://www.theguardian.com/technology/2015/jun/22/major-internet-providers-slowing-traffic-speeds

    Study finds significant degradations of networks for five largest ISPs, including AT&T and Time Warner, representing 75% of all wireline households in US

    Major internet providers, including AT&T, Time Warner and Verizon, are slowing data from popular websites to thousands of US businesses and residential customers in dozens of cities across the country, according to a study released on Monday.

    The findings come weeks after the Federal Communications Commission introduced new rules meant to protect “net neutrality” – the principle that all data is equal online – and keep ISPs from holding traffic speeds for ransom.

    The study, supported by the technologists at Open Technology Institute’s M-Lab, examines the comparative speeds of Content Delivery Networks (CDNs), which shoulder some of the data load for popular websites. Any site that becomes popular enough has to pay a CDN to carry its content on a network of servers around the country (or the world) so that the material is close to the people who want to access it.

    In Atlanta, for example, Comcast provided hourly median download speeds over a CDN called GTT of 21.4 megabits per second at 7pm throughout the month of May. AT&T provided speeds over the same network of ⅕ of a megabit per second. When a network sends more than twice the traffic it receives, that network is required by AT&T to pay for the privilege. When quizzed about slow speeds on GTT, AT&T told Ars Technica earlier this year that it wouldn’t upgrade capacity to a CDN that saw that much outgoing traffic until it saw some money from that network (as distinct from the money it sees from consumers).

    AT&T has strongly opposed regulation of its agreements with the companies that directly provide connectivity between high-traffic internet users and their customers. Cogent, Level3 and others have petitioned the FCC to make free interconnection to CDNs a part of the conditions for the proposed merger between AT&T and DirecTV.

  5. Tomi Engdahl says:

    SpaceX and OneWeb — Same Goal, Different Technology and Strategy
    http://tech.slashdot.org/story/15/06/23/1636206/spacex-and-oneweb—-same-goal-different-technology-and-strategy

    OneWeb has announced that Airbus will manufacture their Internet-connectivity satellites and told us more about their plans and progress. Both OneWeb and their competitor SpaceX have the same goal — global Internet connectivity and backhaul using satellite constellations, but their technologies and organizational strategies are different.

  6. Tomi Engdahl says:

    The White-Boxing of Software-Defined Networking
    White-box network router & switch providers pursue SDN
    http://www.eetimes.com/document.asp?doc_id=1326942&

    A funny thing is happening to the router and switch market on its way to software-defined networking: it’s slowly but surely getting white-boxed.

    White-box is a term that emerged in the 1990s to describe the variety of desktop PCs that were emerging from a host of small original design manufacturers (ODMs). The PCs were virtually identical in performance and software capabilities but were a fraction of the price of brand names such as IBM, HP, Dell and Compaq.

    Similarly, the widespread adoption of software-defined network (SDN) standards has made white-box based network switches possible. The switches depend not on dedicated hardware for separate network functions, but on software-based network function virtualization (NFV) that allows creation of reprogrammable data paths that allow lightning fast reconfiguration of a variety of network elements.

    This is all good news for single-board computer companies such as Taiwan-based Advantech and Austin, Texas-based Freescale Semiconductor, who are developing more powerful and flexible open-source network system solutions. The target is a switch and router market that up to now has been dominated by Intel’s X86 and IBM’s Power processors, and a handful of network system companies such as Arista, Brocade, Cisco and Juniper who dictated to their corporate customers what processors and software were to be used.

    Now all that has changed, with competition coming in the form of white-box designs from single-board computer companies such as Advantech, Adlink, and Kontron, and ODMs such as Dell, Big Switch, Pica8, Quanta, and Salom, among others.

    Rather than use proprietary software, they run on open-source software provided by either the board makers or original design manufacturers (ODMs) but sometimes by the semiconductor companies providing the silicon. White-box switches have much more diverse processor architectures. They’ve moved beyond total dependence on old standbys such as Intel’s x86 and now use processors such as MIPS and ARM, as well as a variety of SoC variations from the likes of Freescale, Broadcom, Cavium and Netronome.

    Based on this new flexibility, Crehan Research believes that, with the total cloud market expected to grow to about 12-million-plus ports by 2017, white-box deployment will increase about 32 percent a year to about 5 million data-center Ethernet white-box switch ports by 2017.

    According to Gartner Research, even though Cisco still dominates the switch/router market with a 50 percent share, the white-box segment is growing quickly. Now at 3.8 percent of the market, Gartner estimates white-boxes will constitute about 10 percent of the 18 million switch ports installed by 2018.

    The use of white-boxes is also spreading to many medium to large corporations who have online cloud services to maintain. What they all want, in addition to raw performance, is low cost, cross-platform capabilities and ease of implementation that SDN and open source generally provide. They will use any network system supplier that will satisfy those requirements, regardless of the underlying processor architecture.

    According to Paul Stevens, marketing director of the network and communications group at Advantech Corp., the board company started modestly in the network market in a few targeted segments closely related to their traditional industrial designs. “Now with SDN and the virtualization of network functions in software, our business there has exploded and we now have a diverse family of offerings using not only the X86, but network processor SoCs from the likes of Broadcom, Freescale, and Netronome.”

  7. Tomi Engdahl says:

    Understanding the network energy efficiency challenge
    Dr Kerry Hinton ticks off seven key energy-saving techs for El Reg
    http://www.theregister.co.uk/2015/06/24/understanding_the_network_energy_efficiency_challenge/

    At the end of last week, the GreenTouch telco energy-efficiency consortium told a presumably-glittering event in New York that its five-year project to design more energy-efficient telecommunications has been a success.

    In fact, the group said, it reckons that if adopted, its approaches could improve mobile network efficiency by 10,000 times by 2020.

    Which is all well and good, but what does it all mean?

    1. Optical interconnect

    Right now, optical links span kilometres or thousands of kilometres, but they’re also valuable to cut power over shorter distances.

    The growing deployment of optical interconnect “from chip to circuit board to rack” will make a big contribution to telco and data centre energy efficiency

    2. Router efficiency

    “By and large,” Hinton said, “equipment for the core network is built for traffic of a certain type. So a router, for example, might be optimised to get maximum throughput of packets of minimum length.

    “That means machines miss the chance to be efficient for different traffic types.”

    3. Optical transponders

    As traffic grows, Hinton said, transponder energy consumption can become dominant in the core network.

    One reason is that transponders are designed for worst-case behaviour: “the transponders in the switch or router are designed for fixed-length links – whether it’s metres or kilometres.”

    It’s a problem that becomes more pressing given the extra energy used to process traffic at 400 Gbps or 1 Tbps.

    4. Access network

    “The biggest issue for most wireline access networks is the home router,” Hinton said – and that’s only going to get worse as carriers roll out more sophisticated, and higher-speed, services.

    When that happens, the home gateway “has to do more processing and chew up more energy”.

    5. NFV

    The other way to make the gateway use less power is to make it less sophisticated. Why have fifty devices in the same street spending five watts each on simple firewall functions (250 Watts) when you probably only need a few watts to run virtual machines in the carrier infrastructure?

    “Again, reducing what the CPU is doing [in the gateway] gives you big, big savings.”

    6. Small-cell wireless

    wireless systems that combine MIMO and beam-forming make more efficient use of the radio signal.

    7. Cellular interference

    “At the moment, when you get your mobile out of your pocket and you only see one or two bars of reception, that might be because of interference from all the other phones around you”, Hinton said.

    The crude workaround today is for the phone to assume it’s got to adjust its power upwards – which is bad for energy efficiency.

    “If all phones can reduce the power they transmit, you reduce the overall power consumption.”

  8. Tomi Engdahl says:

    Melbourne fibre expert calls out ‘fibre cost doubled’ claim
    Em. Prof. Tucker tells fibre conference NBN rollout cost was falling
    http://www.theregister.co.uk/2015/03/24/melbourne_fibre_expert_calls_out_fibre_cost_doubled_claim/

    The idea that the planned fibre-to-the-premises (FTTP) rollout of Australia’s National Broadband Network (NBN) may have ended up meeting its cost targets has received a high-profile endorsement from emeritus professor Rod Tucker of the University of Melbourne.

    Speaking to a US conference, professor Tucker said the claim in the latest strategic review that the FTTP rollout ran at $AU4,300 per household had led to confusion over costs.

    Professor Tucker says the $AU2,500 per FTTP connection given by NBN in its 2012 corporate plan, and $AU2,450 given in the December 2013 strategic review may well have been nearer the mark.

    The other reason for the high price quoted in the latest NBN Co estimates, professor Tucker told the Conference on Optical Fiber Communications in Los Angeles, is that “NBN Co has not implemented many of the cost savings methods” that the 2012 and 2013 documents identified.

  9. Tomi Engdahl says:

    IT pros blast Google over Android’s refusal to play nice with IPv6
    http://www.networkworld.com/article/2939436/lan-wan/it-pros-blast-google-over-android-s-refusal-to-play-nice-with-ipv6.html

    Two trains made of fiber, copper and code are on a collision course, as the widespread popularity of Android devices and the general move to IPv6 has put some businesses in a tough position, thanks to Android’s lack of support for a central component in the newer standard.

    DHCPv6 is an outgrowth of the DHCP protocol used in the older IPv4 standard – it’s an acronym for “dynamic host configuration protocol,” and is a key building block of network management. Nevertheless, Google’s wildly popular Android devices – which accounted for 78% of all smartphones shipped worldwide in the first quarter of this year – don’t support DHCPv6 for address assignment.

    That makes for an uncomfortable contrast with Windows, OS X, iOS, and many of the largest Linux distributions, which all support DHCPv6.

    Google developer and noted IPv6 authority Lorenzo Colitti offers several reasons for the lack of DHCPv6 implementation, including the argument that it would break legacy apps that rely on IPv4 and force developers to adopt IPv6 network address translation (with negative app performance consequences).

    “The problem I see is that stateful DHCPv6 address assignment imposes these disadvantages on users, but doesn’t actually seem to provide any *advantages* to users,” he wrote.

    That hasn’t convinced the critics, however.

    DHCPv6 is so important, in fact, that some companies have been advised to bar Android devices that can’t use the system from corporate networks by their legal departments – more than one university network operator said that legal requirements for identifying the sources of traffic, including the DMCA, made DHCPv6 crucially important.

    “We are thinking to prohibit Android because we cannot fulfill the legal requirements with reasonable effort,” one such user stated.

    “If we’re living in this BYOD world … that can be problematic, because the whole idea besides the money-saving thing is that the end-user is asking for this. By forfeiting [DHCPv6 support], I think Android and Google are causing some trouble in the near-term,” Stofega said.

    Support for alternatives like Recursive DNS Server (RDNSS) has only been available on Android since the release of Version 5.0, or Lollipop – but this isn’t OK with the IT crowd, who argue that an IPv6 implementation without DHCPv6 support is incomplete.

    According to Stofega, many large enterprises have already begun IPv6 rollouts, which makes the issue even more serious.

  10. Tomi Engdahl says:

    The town that banned Wi-Fi
    http://www.theguardian.com/technology/2015/jun/21/the-town-that-banned-wi-fi

    ‘Electrosensitive’ people are flocking to the West Virginian home of a deep-space telescope, attracted by the rules prohibiting phones, TVs and radios. But, as Ed Cumming reveals, their arrival means Green Bank is far from peaceful

    Up and up the roads to Green Bank went, winding into the West Virginian hills as four lanes thinned to one. It was early March and snow was still spattered on the leaf mould between the firs and larches. Hip-hop and classic rock radio stations were gradually replaced by grave pastors and bawdy men twanging banjos and, eventually, they too faded to crackling white noise. The signal pips on my phone hollowed out. I was nearly there.

    Over a crest in the road was the cause of the electronic silence: the National Radio Astronomy Observatory (NRAO), an array of radio telescopes set against the indigo vastness of the Blue Ridge Mountains.

    In the same zone is another telescope, run by the National Security Agency (NSA)

    Thanks to the unusual lack of interference, the town has become a haven for those looking to escape electromagnetic radiation and over the past decade, as many as 40 people have moved here.

    It might not sound much, but Green Bank’s population was only 120 or so to begin with. Imagine two million people moving to London and demanding the city be ghost-proofed

    all the stranger when you consider that no serious scientific study has been able to establish that electrosensitivity exists. According to the World Health Organisation, “EHS [electromagnetic hypersensitivity] has no clear diagnostic criteria and there is no scientific basis to link EHS symptoms to EMF [electromagnetic field] exposure. Further, EHS is not a medical diagnosis, nor is it clear that it represents a single medical problem.”

    “People come here because they say they can hear the electrics,” she replied. “I don’t know if it’s a real condition or not. But the electrosensitives swear it is, so… to each their own, I say.” She didn’t look convinced. “I don’t really mind not having a cellphone,” she added. “You get used to that. And a lot of us have Wi-Fi in our homes anyway, so that’s OK.”

    Hang on, so in The Town Without Wi-Fi, there is in fact quite a lot of Wi-Fi? I worried that this would not make for as catchy a headline as I had hoped. “Not publicly, but at home some of us do. It’s not illegal, but the observatory has a truck that can sense it. They’ll come round and ask you to turn it off.”

    Where the locals might have been happy to tolerate one or two of the sensitives, the mass migration was beyond the pale.

  11. Tomi Engdahl says:

    Hospitals and GPs to start providing free Wi-Fi
    Government looks into viability of turning whole NHS estate in England into massive free Wi-Fi zone
    http://www.theguardian.com/society/2015/jun/17/hospitals-and-gps-to-start-providing-free-wi-fi

    Every hospital and GP surgery in England is likely to start providing free Wi-Fi in a move by the NHS to keep patients entertained and help doctors and nurses use much more technology in their work.

    The government’s National Information Board (NIB) has commissioned a feasibility study into the viability of turning the whole NHS estate in the country into a massive free Wi-Fi zone.

    It would end the situation by which some hospitals charge patients to access Wi-Fi during their stay, others provide it free, but only in certain wards, and some do not offer the service at all.

  12. Tomi Engdahl says:

    Cloud RAN Rains Small Cells
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1326933&

    A cellular network operator shares a case study of success using a cloud radio access (C-RAN) 4G LTE network built using small cell base stations.

    A mobile network operator is judged by the quality of experience it provides to its subscribers. This is not new, but the context is evolving. Consumption has shifted from voice toward an exponential growth in data demand. The vast majority of the data usage on LTE networks occurs indoors.

    As director of network operations and engineering for Nex-Tech Wireless, I must – with finite resources – respond to this ever-growing demand in hard-to-reach places. The outdoor, voice-centric strategy of providing broad but thin coverage from a small number of cell towers is no longer acceptable.

    Until recently our options have been limited when it comes to indoor coverage. The traditional solution is a distributed antenna system or DAS. In DAS, a full macro base station connects to a network of antennas distributed throughout a building.

    DAS can be effective, but at a cost. In addition to the base stations, DAS requires coax or fiber optic cabling to transmit high-bandwidth radio frequency signals to remote antennas, plus intermediate gear to power and condition the signal. DAS is expensive to design, install and configure. Many DAS systems require expensive upgrades every 2-3 years to keep up with capacity growth.

    Small cells have emerged as an alternative. Originally targeted for residential use, small cells are tiny standalone base stations with similar size and transmit power as Wi-Fi access points, but for 3G and 4G wireless technologies.

    Small cells cost less and can be deployed quicker than DAS. However, since each one acts as an independent cell they create cell borders when deployed densely throughout a large building. These borders cause radio interference and frequent cell-to-cell handovers that degrade the user experience. As a result, they require complex RF planning and they can interfere with the outdoor macro network.

    A new technology is emerging to fill the void: Cloud RAN, or C-RAN, small cells. With C-RAN, the baseband processing is centralized, allowing multiple access points to act as a single continuous cell rather than as an array of competing cells. In this respect C-RAN small cells are similar to DAS with its central base station. C-RAN small cells cost less than half as much as upgrading a DAS to LTE.

    Beyond the coliseum, there are many other facilities that can benefit from the C-RAN small cell approach. Within our operating region we have hospitals, office buildings, shopping malls and hotels where our subscribers’ demands for coverage and capacity have the potential to exhaust macro cell capabilities.

  13. Tomi Engdahl says:

    100Mbps, $60-per-month wireless Internet comes to single-family homes
    “Hub homes” get free Internet, deliver broadband to their neighbors.
    http://arstechnica.com/information-technology/2015/06/100mbps-60-per-month-wireless-internet-comes-to-single-family-homes/

    Last week, we wrote about a wireless Internet service provider called Webpass that sells 500Mbps upload and download speeds for just $55 a month.

    That service is only available for businesses and multi-unit residential buildings in big cities because it wouldn’t be financially feasible to bring it out to the suburbs and single-family homes.

    But there’s another wireless Internet service provider that is bringing pretty fast Internet—100Mbps upload and download for $60 a month—to single-family homes. The company, Vivint, makes home security and automation technology as well as rooftop solar panels, but it expanded into broadband after customers practically begged for it.

    The company’s door-to-door sales force “got consistent feedback from customers asking, ‘Do you happen to offer Internet?’ People are dissatisfied with their ISP in many places,” Vivint Wireless General Manager Luke Langford told Ars this week.

    Each neighborhood has a “hub home” that gets free service in exchange for hosting the rooftop equipment needed to serve neighbors.

    Despite the advertised speeds and prices, switching to Vivint isn’t a slam-dunk choice. The company has gotten numerous bad reviews from customers describing outages, slower-than-promised speeds, and inadequate customer service responses.

    To be fair, all ISPs will have occasional outages and angry customers.

    Vivint’s architecture

    While the Webpass model uses point-to-point wireless, Vivint instead uses point-to-multi-point, similar to cellular networks and the rural wireless Internet providers that connect homes in far-flung areas.

    The company has its own network architecture, though, which it honed through trial and error.

    Vivint uses licensed spectrum, mostly in the 28GHz range, to send its signals from fiber-connected towers and high buildings to residential neighborhoods. Those signals travel several miles and are then picked up by equipment on top of “hub homes,” which distribute Internet service to their neighbors. Each tower has somewhere between 1Gbps and 10Gbps of bandwidth, and up to 128 hub homes can connect to a single tower, depending on tower space, spectrum availability, and fiber capacity.

    A hub home has three antennas on its roof. One connects back to the tower and the other two are access points that use 5GHz Wi-Fi airwaves to serve broadband to up to 24 homes within a 1,000-foot radius. Each access point has a 180-degree radiation pattern.

    A hub home is a home “that we identify as being one that would be a good hub for that neighborhood,” Langford said. “We approach that homeowner and we say, ‘Would you like free, fast Internet forever?’”

    Using licensed spectrum helps Vivint get signals to neighborhoods without interference, Langford said. While the 5GHz band used to create networks within neighborhoods is unlicensed, meaning anyone can use it, Vivint designed its own proprietary radios and software to distribute a good signal that can withstand stormy weather.

    The wireless ISP equipment readily available from vendors didn’t work for Vivint’s model, Langford said. It was either too expensive, making it unsuitable for delivering reasonably priced service to single-family homes, or too slow.

    Vivint’s network has gone through a few iterations.

    With Vivint’s current equipment, a single hub home provides a total of 300Mbps upload and download speeds to the surrounding houses. The network is oversubscribed just as many wired networks are

    Even during peak evening usage hours, customers can generally get close to their maximum speeds, according to Langford.
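    A back-of-envelope check on the figures quoted above shows the scale of that oversubscription. The arithmetic below uses only the article’s numbers and says nothing about how Vivint actually provisions its network.

    ```python
    # Oversubscription math from the figures in the article (illustrative only).
    advertised_mbps = 100      # advertised speed per subscriber home
    homes_per_hub = 24         # maximum homes served by one hub home
    hub_capacity_mbps = 300    # total shared capacity per hub home
    hubs_per_tower = 128       # maximum hub homes per tower

    demand_per_hub = advertised_mbps * homes_per_hub   # 2,400 Mbps sold per hub
    print(demand_per_hub / hub_capacity_mbps)          # 8.0 -> about 8:1 per hub
    print(hubs_per_tower * hub_capacity_mbps / 1000)   # 38.4 Gbps for a full tower,
                                                       # against 1-10 Gbps of backhaul
    ```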

    Vivint’s latest rooftop equipment still uses the older 802.11n Wi-Fi standard, but a future upgrade to 802.11ac will boost reliability and perhaps speed as well.

    A Vivint hub home could technically offer service to locations more than 1,000 feet away and to more than 24 homes, but that would reduce speed and reliability. It also makes Vivint’s current model unsuitable for rural towns where homes are further away from each other.

  14. Tomi Engdahl says:

    New Zealand ISPs Back Down On Anti-Geoblocking Support
    http://news.slashdot.org/story/15/06/25/001212/new-zealand-isps-back-down-on-anti-geoblocking-support

    A number of New Zealand Internet service providers will no longer offer their customers support for circumventing regional restrictions on accessing online video content. Major New Zealand media companies SKY, TVNZ, Lightbox and MediaWorks filed a lawsuit in April, arguing that skirting geoblocks violates the distribution rights of its media clients for the New Zealand market.

    NZ ISPs back down on anti-geoblocking support
    Geoblocking question unresolved after New Zealand lawsuit ends
    http://www.computerworld.com.au/article/578218/nz-isps-back-down-anti-geoblocking-support/

  15. Tomi Engdahl says:

    Chinese tech firms fight over Wi-Fi router with ‘pregnant women’ setting
    http://www.theguardian.com/technology/2015/jun/24/pregnant-women-wi-fi-router-qihoo-xiaomi-fight-china

    Xiaomi accuses rival Qihoo of scare tactics after it markets router complete with safety mode for women expecting a child

    Move over Apple and Samsung, the latest tech rivalry is in town – and it’s to do with a router aimed at pregnant women.

    Chinese tech firm Qihoo has launched an upgrade to its P1 router, which features three settings: “wall penetration”, “balance” and … er, “pregnant women”.

    Xiaomi’s post read: “The so-called pregnancy mode [of Qihoo’s router] is just a marketing tactic. Wi-Fi usage is safe, so please rest assured when using it [Xiaomi’s router].”

    Qihoo’s Zhou Hongyi claims that the upgraded P1 router protects pregnant women from any harm from signals, as it reduces radiation by around 70 percent.

    “We are targeting people who are afraid of radiation”, he said. However, in a statement to South China Morning Post, Qihoo acknowledged that no definitive link has been made between Wi-Fi signals and poor health.

    “We aren’t scientists. We haven’t done many experiments to prove how much damage the radiation from Wi-Fi can cause. We leave the right of choice to our customers.”

    Wi-Fi: are there any health risks?
    http://www.theguardian.com/technology/askjack/2012/sep/27/wi-fi-health-risks

    Hermie’s Wi-Fi uses the same radio frequency as his microwave oven. Does that mean Wi-Fi is dangerous to his health?

  16. Tomi Engdahl says:

    Electronic Frontier Foundation:
    ICANN considering changes that would disallow proxy registration for domains used for commercial purposes, could put user privacy at risk

    Changes to Domain Name Rules Place User Privacy in Jeopardy
    https://www.eff.org/deeplinks/2015/06/changes-domain-name-rules-place-user-privacy-jeopardy

    But under a proposal [PDF] currently being considered by ICANN, that may all change. It is proposed that domains used for commercial purposes might no longer be eligible to use proxy registration services. Is TG Storytime used for commercial purposes? Well, Joe currently covers the site’s expenses, but also notes that “ads and donations may be used in the future to cover costs”, and sites that run ads have been judged as commercial in domain name disputes.

  17. Tomi Engdahl says:

    Silicon photonics: Will the hare finally catch the tortoise?
    http://www.edn.com/electronics-blogs/eye-on-standards/4439755/Silicon-photonics-Will-the-hare-finally-catch-the-tortoise-?_mc=NL_EDN_EDT_EDN_weekly_20150625&cid=NL_EDN_EDT_EDN_weekly_20150625&elq=0dd6c755e76142a68941acc7d620b499&elqCampaignId=23611&elqaid=26649&elqat=1&elqTrackId=660361e001a949b0b8a9b0a018b81558

    My question: at what data rate do you think engineers will have to focus on designs that use photons to push data? Her answer: 50 Gbps per optical carrier for transmission farther than two meters.

    By overcoming the signal-degrading characteristics of electrical interconnects (loss, messy frequency response, crosstalk, and impedance matching difficulties) with clever technologies such as pre/de-emphasis, embedded clocking, and equalization, the need for silicon photonics has been pushed into the future, beyond 28 Gbps. Optical interconnects at 10-100 Gbits/s suffer chromatic and polarization-mode dispersion, plus tiny levels of loss and reflections, but only after propagating hundreds of meters. Optical eyes stay wide open and awake over reaches of dozens of meters.

    The people who actually do silicon photonics are comfortable including fiber-optic applications in their field. Being a fan in the cheap seats, however, I think it’s fair to distinguish fiber optics from silicon photonics. A purist’s silicon photonics is not infected with fibers; that’s fiber optics. Genuine silicon photonics applications transmit photonic signals across waveguides within a chip, or chip-to-chip, or what have you, but not through transceivers coupled to fibers. Last year at the Intel Developer Forum they had a whole silicon photonics demo area, but all I saw was fiber optics.

    Which brings us to the cost issue. Verdiell said, “Cost of developing the chips is much higher than for regular transceivers” and isn’t “amortized over volume enough to be competitive (yet).” The second cost issue, Verdiell said, is packaging: “Silicon Photonics does not have a cost and size competitive single-mode packaging solution (yet).” Remember, single-mode fibers, as opposed to multi-mode, are the ones capable of transmitting over kilometers.

    The path from electrical signaling to the purist’s Si photonics will include intermediate steps.

    Genuine silicon photonic systems without fibers transmit signals through waveguides integrated into the silicon. Because light doesn’t behave nicely in pure silicon waveguides (due to nonlinearities like two-photon absorption, stimulated Raman scattering, and the Kerr effect), nicely behaved optical channels are made by depositing polymer waveguides onto the silicon that holds the laser, modulator, and receiver, usually an APD (avalanche photodiode). These systems don’t require independently packaged transceivers, connectors, and fibers, but they do face the developmental problem of coupling optical signals from pure silicon to polymer.

    Verdiell said that they’re pushing forward.

  18. Tomi Engdahl says:

    Teardown: Inside a 3G MicroCell
    http://www.edn.com/design/consumer/4439769/Teardown–Inside-a-3G-MicroCell?_mc=NL_EDN_EDT_EDN_consumerelectronics_20150625&cid=NL_EDN_EDT_EDN_consumerelectronics_20150625&elq=f10aeef8dc4e413ba3157058acec9f94&elqCampaignId=23632&elqaid=26678&elqat=1&elqTrackId=08b70d3d677547e98362e65c757a3f73

    I’ve admittedly had a bit of a love-hate relationship with AT&T’s 3G MicroCell and its femtocell alternatives from Verizon Wireless and others. On the one hand, it continues to irritate me that instead of building out their own networks, the carriers require customers to buy (although some get ’em for free if they complain loud and long enough) a mini cellular base station that routes cellular traffic over the customer’s broadband connection, eating into monthly broadband usage allocations in the process, and continuing to eat into monthly cellular usage allocations even though the cellular network isn’t being used. On the other hand, I can’t complain about the resulting coverage-reliability boost, and I realize that it’s not necessarily cost-effective for carriers to expand coverage to low-population-density and/or challenging-topology areas such as the mountainous regions that I prefer.

    “a single-chip solution for HSPA femtocells compliant to TR25.820 and the newly standardized Iuh interface. Supporting up to four users for residential and SME femtocell access points and with data rates of 14.4 / 5.7Mb/s in downlink and uplink respectively the PC302 enables the lowest bill-of-materials and lowest power for a femtocell available today.”

    Speaking of antennas, where’s the cellular aerial? I suspect it’s built directly into the PCB

    AT&T Microcell Disassembly; Security Flaws Exposed
    http://linux.slashdot.org/story/12/04/04/1536224/att-microcell-disassembly-security-flaws-exposed

  19. Tomi Engdahl says:

    Powering Ethernet, Part 1: Designing for Low Power Consumption in Operation
    http://www.edn.com/design/power-management/4439694/Powering-Ethernet–Part-1–Designing-for-Low-Power-Consumption-in-Operation

    Analyzing power consumption in Ethernet circuitry shows it to be far from efficient. This is in part because Ethernet consumes similar energy during both traffic and idle periods, and idle periods typically account for more than 97 percent of the time. This insight was key to determining where improvements could be made when an IEEE task force investigated the issue, resulting in the standardization of IEEE 802.3az, or Energy Efficient Ethernet.

    Energy Efficient Ethernet shows great promise to succeed universally where earlier attempts to reduce idle-period power, such as Wake-on-LAN, have somewhat failed. Complementing Energy Efficient Ethernet, additional power savings can also be made both during normal traffic and link down. This paper outlines where the current is consumed and how to design for the lowest power consumption, both in operation (Part 1) and standby (Part 2), since calculating the power consumption of an Ethernet circuit is not always straightforward.

    In cases where Ethernet datasheets publish only the device current consumption, calculating the total circuit current consumption requires the designer to add typically around 40mA per 100Base-TX PHY or 70mA per 10Base-T PHY for dissipation in the transformers. As a result, a lower device-only consumption at 10Base-T will rarely equate to lower total circuit current consumption relative to 100Base-TX mode.

    When trying to further reduce power consumption, a designer must consider the following two modes: normal operation and standby.

    The reality is long quiet periods followed by relatively short bursts of traffic

    During these quiet periods, Ethernet power consumption might be expected to drop significantly; however, this turns out to not necessarily be the case.

    1000Base-TX and 100Base-TX are both designed so that the link partners are continually synchronized to each other. To enable this, when no traffic is being transmitted the PHY will automatically send out IDLE symbols (11111 5B code). As a consequence, during any quiet period the PHY transmitter is still operating in a manner similar to full traffic – meaning it consumes a similar amount of power.

    With multi-port Ethernet devices, it is strongly advisable to disable any unused port (PHY), since simply by connecting to a link partner, around 40mA of current is consumed even with no traffic present.

    10Base-T operation differs during quiet periods, since when no traffic is present, the PHY transmitter does not transmit out any IDLE symbols. Instead, it sends out a single link pulse approximately every 16ms, designed simply to keep the link alive. The power consumption of the PHY itself during a quiet period in 10Base-T operation will not reduce significantly, but the current consumed externally in the transformer will reduce to negligible, saving around 70mA per PHY compared to full traffic.

    waveform is designed to ensure that the PHY is capable of operating up to a minimum 100m of CAT5 grade cable

    However, in practice, many applications do not require the capability of 100m cable reach and can guarantee a much shorter length. A simple change to the circuit can reduce the PHY transmitter current drive, typically set by a resistor, from the standard ±1V amplitude of the 100Base-TX signal down by up to 50 percent and still operate error-free over a 10-20m reach (typical for automotive networks). For example, doubling the resistance will halve the typical 40mA 100BT drive current to around 20mA per port. Longer cable reach could be achieved at reduced current drive by installing higher-quality cable, e.g., CAT6 or above, that exhibits lower attenuation. System costs, however, are increased.
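    Putting those figures together, a rough current budget can be sketched as below. The 40mA/70mA per-port transformer figures and the resistor-doubling example come from the article; the 60mA device-only current is an invented placeholder.

    ```python
    # Rough Ethernet current budget using the per-port figures quoted above;
    # the 60 mA device-only current is an invented example value.
    def total_current_ma(device_ma, ports_100bt=0, ports_10bt=0):
        """Device-only current plus typical transformer dissipation per port."""
        return device_ma + 40 * ports_100bt + 70 * ports_10bt

    print(total_current_ma(60, ports_100bt=2))  # 140 mA for a 2-port 100Base-TX design

    # Short-reach links: doubling the drive-setting resistor roughly halves
    # the ~40 mA 100Base-TX drive current.
    print(40 / 2)  # ~20 mA per port
    ```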

    The use of cable diagnostics features, such as Micrel’s LinkMD®, offers the ability to measure the connected cable length using time-domain reflectometry techniques. This allows designers to intelligently adjust the drive strength according to the cable length.

    Another area important to explore when ensuring maximum energy efficiency is the power management of the Ethernet device. Many modern devices operate using a single voltage, typically 3.3V, and provide internal regulation for core voltage(s).

    Power consumption in electronic applications has increasingly been viewed as critical, with worldwide legislation forcing manufacturers to improve energy efficiency.

    Powering Ethernet, Part 2: Optimizing Power Consumption during Standby Operation
    http://www.edn.com/design/power-management/4439759/Powering-Ethernet–Part-2–Optimizing-Power-Consumption-during-Standby-Operation?_mc=NL_EDN_EDT_EDN_analog_20150625&cid=NL_EDN_EDT_EDN_analog_20150625&elq=77e7384171544bfca1b57f1318186eea&elqCampaignId=23620&elqaid=26666&elqat=1&elqTrackId=6a5ea7981f0d48a2aa25371a698c3c56

    Power consumption during the standby operation is also of concern, especially when designing to specifications such as the ‘One Watt Initiative.’ This is an energy-saving initiative by the International Energy Agency to reduce standby power-use by any appliance to not more than one watt in 2010, and 0.5 watts in 2013, which has given rise to regulations in many countries and regions and impacts device design.

    Typically, the Ethernet device power-down state is controlled via an internal register bit, which, when enabled, completely powers down the device except for the management interface.

    Power saving mode is used to reduce the PHY power consumption automatically when the cable is unplugged or link partner disabled. The receive circuit detects the presence or absence of a signal, commonly known as ‘energy detect’ to enter or exit power saving mode.

    Wake-on-LAN (WoL) seems to offer a solution for waking up the system during a low-power idle state. However, the reality is that this feature rarely achieves its goal. In WoL, a special wake-up sequence is sent to the Ethernet device, which detects it and asserts an interrupt signal used to notify the host to power up the rest of the system. WoL has arguably never become popular because it is not standards-driven, nor does it have a single common defined wake-up sequence, which hinders interoperability across vendors.

    A second limitation when minimizing standby power is found in the implementation of WoL functionality in the MAC, not PHY layer.

    The IEEE has clearly recognized power consumption inefficiency within Ethernet circuits. Its 802.3az task force, also known as Energy Efficient Ethernet, was targeted to reduce power consumption during periods of low link utilization (idle time).

    Reducing power consumption during low link utilization periods allows for drastic improvement in power efficiency. Known as Low Power Idle (LPI), this technique will disable parts of the PHY transceiver that are not necessary, whilst still maintaining the link integrity.

    Conclusion

    IEEE 802.3az Energy Efficient Ethernet will prove to be a significant aid in reducing idle-period power. Complementing Energy Efficient Ethernet, additional power savings can also be made both during normal traffic and link down.

  20. Tomi Engdahl says:

    The Rise Of The Cloud WAN: The trend of pushing WAN functionality into the cloud — software-defined WAN — is gaining steam as enterprises look to reduce WAN complexity and lower costs.

  21. Tomi Engdahl says:

    MAC address privacy inches towards standardisation
    IEEE hums along to IETF anti-surveillance tune
    http://www.theregister.co.uk/2015/06/26/mac_address_privacy_inches_towards_standardisation/

    The Internet Engineering Task Force’s (IETF’s) decision last year to push back against surveillance is bearing fruit, with the ‘net boffins and the IEEE proclaiming successful MAC address privacy tests.

    While MAC address randomisation has been a feature of various clients (including Linux, Windows, Apple OSs and Android) for some time, it has yet to be written into standards.

    Hence, as part of the anti-surveillance effort it launched in May 2014, the IETF had identified MAC address snooping as a problem for WiFi users.

    In November, the IETF ran an experiment to look at whether MAC address randomisation would upset the network – for example, because two clients presented the same MAC address to an access point.

    The success of that test had to be confirmed with the IEEE, though, because the latter is the standards body responsible for 802 standards. Those standards are where the handling of the media access control address is specified, so changing the old assumption that the MAC address is written into hardware needs the IEEE’s co-operation.

    Now, the IETF and IEEE have agreed that the experiment was a success, along with further trials at the IEEE’s 802 plenary and a second IETF meeting, both in March.

    InterDigital principal engineer Juan Carlos Zuniga, who chairs the IEEE 802 Privacy Executive Committee Study Group, said the tests “set the stage for further study and collaboration to ensure the technical community prioritises Internet privacy and security”.

    Back in the 1980s when Ethernet was first created, and even in the 1990s when WiFi was born, little thought was given to the risk that the MAC address could put personal privacy at risk.

    The blossoming of mobile computing, smartphones, and public WiFi, however, means that fixed, unique identifiers no longer look like such a good idea.
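    For flavour, here is a minimal sketch of the kind of address a randomising client might generate. In IEEE 802 addressing, setting the locally-administered bit and clearing the multicast bit of the first octet marks a valid private unicast address; this shows only the address format, not any particular operating system’s implementation.

    ```python
    # A minimal sketch of MAC address randomisation: set the locally-
    # administered bit (0x02) and clear the multicast bit (0x01) in the
    # first octet, yielding a valid private unicast address.
    import random

    def random_mac() -> str:
        octets = [random.randint(0, 255) for _ in range(6)]
        octets[0] = (octets[0] | 0x02) & 0xFE  # locally administered, unicast
        return ":".join(f"{o:02x}" for o in octets)

    print(random_mac())  # e.g. 'a6:3e:12:9b:07:c4'
    ```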

  22. Tomi Engdahl says:

    G.fast is coming, so get ready, telcos tell ACCC
    Definition matters
    http://www.theregister.co.uk/2015/06/26/gfast_is_coming_so_get_ready_telcos_tell_accc/

    The Communications Alliance has told the Australian Competition and Consumer Commission it wants deployment rules to leave room for G.fast in the future.

    At the moment, G.fast can’t be deployed in Australia, because the rules governing the copper don’t allow the use of spectrum above 2 MHz (the same issue gets in the way of VDSL2 – which is one reason why the NBN restricts fibre-to-the-node speeds).

    “If a service description for an SBAS includes the use of spectrum above 2.2 MHz on the metallic twisted pair cable then it will effectively exclude existing broadband service provided via technology such as ADSL, ADSL2+ and SHDSL”, the submission notes.

  23. Tomi Engdahl says:

    WiFi Offloading is Skyrocketing
    http://mobile.slashdot.org/story/15/06/25/2157218/wifi-offloading-is-skyrocketing

    WiFi offloading is skyrocketing. This is the conclusion of a new report from Juniper Research, which points out that the amount of smartphone and tablet data traffic on WiFi networks will increase to more than 115,000 petabytes by 2019, compared to under 30,000 petabytes this year, representing almost a four-fold increase. Most of this data is offloaded to consumers’ WiFi by the carriers, offering the possibility to share your home internet connection in exchange for “free” hotspots. But this article on InformationWeek Network Computing also warns that “The capacity of the 2.4GHz band is reaching its limit. [...]

    WiFi Offloading To Skyrocket
    http://www.networkcomputing.com/wireless-infrastructure/wifi-offloading-to-skyrocket-/d/d-id/1321007

    Carriers will offload a four-fold increase in mobile data traffic to WiFi networks by 2019, Juniper Research predicts.

    Cellular carriers will offload nearly 60% of mobile data traffic to WiFi networks over the next four years, according to a new study from Juniper Research.

    Carriers in North America and Western Europe will be responsible for over 75% of the global mobile data being offloaded in the next four years, Juniper said. The amount of smartphone and tablet data traffic on WiFi networks will increase to more than 115,000 petabytes by 2019, compared to under 30,000 petabytes this year, representing almost a four-fold increase.

    WiFi offloading, also called carrier WiFi, has become pervasive as many big cellular carriers and ISPs have deployed large numbers of WiFi hotspots in cities using the existing infrastructure of their customers’ homes and businesses. This enables carriers to offload the saturated bandwidth on 3G and LTE networks.

    “With WiFi-integrated small cells, seamless data services can be extended to non-cellular devices as well, such as cameras and WiFi-only tablets, offering operators the opportunity to develop new revenue streams,” wrote Nitin Bhas, head of research at Juniper Research.

    WiFi offloading currently offers a good solution to cellular data bottlenecks, but operators cannot rely solely on residential customers to carry the bulk of the data.

    “Operators need to deploy [their] own WiFi zones in problematic areas or partner with WiFi hotspot operators and aggregators such as iPass and Boingo,” Bhas added.
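    The “almost four-fold” figure is easy to sanity-check, taking the report’s 2015 and 2019 numbers at face value:

        pb_2015 = 30_000   # petabytes on WiFi this year (report's figure)
        pb_2019 = 115_000  # forecast for 2019
        years = 4

        growth = pb_2019 / pb_2015          # ~3.8x, i.e. "almost four-fold"
        cagr = growth ** (1 / years) - 1    # ~40% compound annual growth
        print(f"{growth:.1f}x overall, ~{cagr:.0%} per year")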

  25. Tomi Engdahl says:

    Pew Internet:
    US Internet adoption patterns by age, class, race, and urban vs. rural over the last 15 years — Americans’ Internet Access: 2000-2015 — As internet use nears saturation for some groups, a look at patterns of adoption

    Americans’ Internet Access:
    2000-2015
    http://www.pewinternet.org/2015/06/26/americans-internet-access-2000-2015/

    As internet use nears saturation for some groups, a look at patterns of adoption

  26. Tomi Engdahl says:

    A new wave of US internet companies is succeeding in China—by giving the government what it wants
    http://qz.com/435764/a-new-wave-of-us-internet-companies-is-succeeding-in-china-by-giving-the-government-what-it-wants/

    Facebook found itself shut out from China in 2009. Twitter got blocked the same year. In 2010, Google pulled its search services from China after a government hack. Beijing, it seems, was sending a message to high-profile American internet companies: play by our rules and censor content, or don’t play at all.

    After Google’s exit, those three firms have yet to come back. But in recent years, other American internet companies have found a degree of success in China—or at least a bit more stability than their predecessors.

    The solution involves sacrifice—hand over data and control, and the Chinese government will hand you the keys to the market.

  27. Tomi Engdahl says:

    Ofcom: We need 5G spectrum planning for the future’s ultramobes
    We’ll just have to make decisions based on ‘imperfect knowledge’
    http://www.theregister.co.uk/2015/06/29/ofcom_need_5g_spectrum_planning_future_ultramobes/

    Despite the majority of Reg readers thinking that 5G can wait, there needs to be some planning as to how spectrum will be allocated if the same frequencies are to be made available globally.

    The need for international harmony on use of millimetre spectrum was the focus of a recent presentation by UK regulator Ofcom to the LTE World Summit in Amsterdam.

    Andrew Hudson, director of spectrum policy, told delegates: “Everyone wants maximum flexibility to do what they want to do in the future, but it’s very difficult to say now exactly what they want to do in five, ten or 15 years time. So you have to try and make those spectrum decisions on imperfect knowledge.”

    The 5G standards have not yet been set and a lot will depend on the outcomes from the World Radio Conference next November, which will set the global standards for the use of spectrum. What is clear is that the mobile world is looking for a lot of spectrum at very high frequencies.

    Huawei has said that 5G will need at least 100MHz of contiguous spectrum, while Ericsson has proposed 500MHz. Providing the space to multiple operators will, of course, multiply the requirement.
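    To see how fast that multiplies, assume a market with four operators (a made-up figure, purely for illustration) each needing its own contiguous block:

        # spectrum demand if each operator gets its own contiguous block
        per_operator_mhz = {"Huawei's floor": 100, "Ericsson's proposal": 500}
        operators = 4  # hypothetical market size
        for name, mhz in per_operator_mhz.items():
            total_ghz = operators * mhz / 1000
            print(f"{name}: {operators} operators x {mhz} MHz = {total_ghz:.1f} GHz")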

  28. Tomi Engdahl says:

    Is that a FAT PIPE or are you just pleased to stream me? TERABIT fibre tested
    Trials conducted over a 1,000km long link
    http://www.theregister.co.uk/2015/06/29/proximus_huawei_trial_terabit_fibre_speed/

    Proximus and Huawei have successfully trialled a super-channel optical signal, flinging out information at up to one terabit per second (Tbps).

    Tech lothario Huawei shacked up with Belgian box-wrecker Proximus back in January.

    The pairing has now produced a single super-channel optical transport network (OTN) card with a transmission speed of a pretty hefty 1Tbps, running along Proximus’ optical backbone.

    Although the speed is claimed to be a record, Alcatel-Lucent and BT managed to achieve 1.4 Tbps using BT’s fibre-optic pipe between the BT Tower and BT’s Adastral Park in Ipswich.

    Proximus/Huawei’s transmission speed was conducted over a 1,040km fiber link using an advanced “Flexgrid” infrastructure with Huawei’s Optical Switch Node OSN 9800 platform.

    The companies claim their approach increases the capacity on a fiber cable by compressing the gaps between transmission channels. “The technique increases the density of the transmission channels on fiber, making it around 150 per cent more efficient than today’s typical 100Gbps core network links,” they said.
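    Reading “around 150 per cent more efficient” as 2.5 times the spectral efficiency of a 100Gbps channel on a standard 50 GHz grid slot (both are my assumptions, since the companies gave no grid figures), the super-channel would squeeze 1Tbps into roughly 200 GHz:

        baseline_eff = 100 / 50                # Gb/s per GHz: 100G on a 50 GHz slot
        superchannel_eff = baseline_eff * 2.5  # "150 per cent more efficient"
        width_ghz = 1000 / superchannel_eff    # spectrum needed for 1 Tb/s
        print(f"~{width_ghz:.0f} GHz for a 1 Tb/s super-channel")  # ~200 GHz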

  29. Tomi Engdahl says:

    2550100 … An Illuminati codeword or name of new alliance demanding faster Ethernet faster?
    What do we want? 25GbitE. When do we want it? NOW
    http://www.theregister.co.uk/2015/04/15/alliance_wants_faster_ethernet_to_arrive_faster/

    An alliance called 2550100 has been announced by QLogic and others to deliver faster Ethernet faster – starting with 25GbitE to deliver better-than-10gig speed without jumping all the way to 40gig.

    There is a 2550100.com website, which lists 13 members, including DataCore, Finisar, HDS, Huawei, Lenovo, SuSE, QLogic (of course), X-IO and Zadara. Obvious holes in the list include Broadcom, Brocade, Emulex and Cisco.

    QLogic’s Ahmet Houssein, Ethernet products senior veep, said: “25Gb, 50Gb and 100Gb represents multiple breakthroughs for the Ethernet industry. 25Gb brings economy to high-performance networking, 100Gb puts Ethernet at parity with InfiniBand, while the underlying chips and protocol stack are designed to meet the needs of both hyperscale and enterprise customers.”

    Server connectivity is the key, although storage arrays will love faster Ethernet access too. It makes iSCSI go faster and – you can see Cisco nodding approvingly – FCoE (Fibre Channel over Ethernet) more attractive. This FCoE pump-priming aspect makes it odder that Cisco isn’t involved in this 2550100 go-getter group.

    Fundamentally, the development of 25GbitE products was driven by hyperscale cloud providers who needed a more economical network speed between 10G and 40G. Twinning 25GbitE adapters to reach 50gig is better – cheaper and faster – than quadrupling 10gig adapters to reach 40gig.
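    The economics follow from the lane arithmetic: 25GbitE uses a single 25Gb/s serdes lane, so doubling or quadrupling lanes yields 50GbitE and 100GbitE, whereas 40GbitE burns four 10Gb/s lanes. A quick illustration:

        # (lanes, per-lane rate in Gb/s) for common Ethernet speeds
        configs = {
            "10GbE":  (1, 10),
            "40GbE":  (4, 10),
            "25GbE":  (1, 25),
            "50GbE":  (2, 25),
            "100GbE": (4, 25),
        }
        for name, (lanes, rate) in configs.items():
            print(f"{name}: {lanes} x {rate} Gb/s lanes = {lanes * rate} Gb/s total")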

  30. Tomi Engdahl says:

    Google wants to bring free wifi to the world…. and it’s starting NOW
    http://metro.co.uk/2015/06/25/google-wants-to-bring-free-wifi-to-the-world-and-its-starting-now-5265352/?ito=facebook

    No longer will we have to buy an overpriced coffee to get a rubbish wireless internet connection. *Fist pump*

    Google is rolling out free wifi in New York as part of a trial the company hopes will eventually span across the whole of the world.
    How will they do this, you ask? By turning 10,000 of the Big Apple’s old phone booths into ad-supported ‘Wi-Fi pylons’.


  31. Tomi Engdahl says:

    Scientists Have Broken One of the Biggest Limits in Fibre Optic Networks
    Posted Sunday, June 28th, 2015 by Mark Jackson
    http://www.ispreview.co.uk/index.php/2015/06/scientists-have-broken-one-of-the-biggest-limits-in-fibre-optic-networks.html

    One of the biggest limitations in ultrafast fibre optic communications is that even optical signals can weaken over long distances, which requires expensive electronic regenerators (repeaters) to be placed along the route to boost the signal. But what if you didn’t need those? Future networks could be both faster and cheaper.

    Last year a research group (the Photonics Systems Group) at the University of California, San Diego (USA), specifically its Qualcomm Institute, published a paper theorising that it might be possible to pre-empt “the distortion effects that will happen in the optical fiber” and, in so doing, remove the interference.

    Information sent down fibre optic cables is generally split into multiple channels of communication (up to 200 can be used in modern cables), which each operate at different frequencies (i.e. light can be split into different colours [red, green, blue etc.] and this is a good way of visualising how the different channels are separated).

    The team were able to boost the power of their transmission some 20-fold and push data over a “record-breaking” 12,000km (7,400 mile) fibre optic cable. Amazingly, the data was still intact at the other end, and that was achieved without repeaters, using only standard amplifiers (the cost of repeaters plays a big part in the overall expense of such networks).

    The engineers used a frequency comb to synchronize the frequency variations of the different streams of optical information (optical carriers), which makes it possible to compensate in advance for the crosstalk interference (this will be familiar to those who have been reading about FTTC / VDSL2 Vectoring technology on copper cables) that can occur between multiple communication channels within the fibre optic cable. The frequency comb also ensures that the crosstalk interference is reversible.

    In some approaches this calculation of pre-distortion is already done with existing systems, but not across all the channels together.

    The solution, assuming it propagates well into an existing commercial environment (the cost of transmitters might still be an issue), suggests that the future cost of both national and international data capacity could be about to fall.
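    The idea of pre-empting a deterministic distortion is easy to demonstrate with a toy model: if the channel applies a known nonlinearity, transmit its inverse. The cubic term below is a stand-in chosen for illustration only, not the actual Kerr physics:

        def channel(x, a=0.1):
            # toy memoryless nonlinearity standing in for fibre distortion
            return x + a * x ** 3

        def predistort(target, a=0.1, iters=20):
            # invert the nonlinearity with Newton's method: solve x + a*x^3 = target
            x = target
            for _ in range(iters):
                x -= (x + a * x ** 3 - target) / (1 + 3 * a * x ** 2)
            return x

        for s in (-1.0, -0.5, 0.5, 1.0):
            # channel(predistort(s)) recovers s: the distortion cancels
            # because it is deterministic, not random
            print(s, round(channel(predistort(s)), 6))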

    Comments:

    The team have done an impressive experiment, but their press office could do with some wider reading.

    Pre-distortion of signals is already used in the fibre systems deployed by BT, Virgin Media, Vodafone, O2, SSE, and many others. The same coherent technology is already doing 22,000km unrepeatered across the Pacific. A 20-fold launch power improvement is only 13dB, which is about 50km.

    What is new is processing all the channels together to calculate the pre-distortion. Lovely idea for the lab, but wouldn’t work in practice where channels are deployed one at a time, as each transmitter costs as much as a house (so you don’t deploy them unless you use them).

  32. Tomi Engdahl says:

    James Surowiecki / MIT Technology Review:
    Google Fiber has pushed ISPs to improve broadband speeds in many markets, a result the FCC’s National Broadband Plan failed to achieve — Broadband Speeds Are Improving in Many Places. Too Bad It Took Google to Make It Happen. — It’s too often said that some event “changed everything” in technology.

    The Wait-for-Google-to-Do-It Strategy
    http://www.technologyreview.com/review/538411/the-wait-for-google-to-do-it-strategy/

    America’s communications ­infrastructure is finally getting some crucial upgrades because one company is forcing ­competition when regulators won’t.

  33. Tomi Engdahl says:

    EBU backing for net neutrality rules
    http://www.broadbandtvnews.com/2015/06/26/119321/

    Europe’s public service broadcasters have called on EU governments to ‘make history’ by setting out the rules for the future of the open internet.

    In a declaration issued by the EBU, members said net neutrality rules are necessary to strengthen freedom of expression in the digital age, to foster knowledge for citizens, to leverage incentives for Europe’s creative industries and to boost innovation.

    EBU President Jean-Paul Philippot said: “PSM organisations in Europe share the view that strong net neutrality rules need to be one of the foundations of the Internet of tomorrow. Without clear and strong rules, access to online-content risks becoming confined to walled gardens rather than widely available in open spaces.”

  34. Tomi Engdahl says:

    Study: The Internet Has Finally Become TV
    http://gizmodo.com/so-many-people-are-cutting-the-cord-we-need-more-intern-1707226252

    In the next five years, more than 50 percent of the world’s population will have internet access, and 80 percent of internet traffic will be devoted to video, says a new study by Cisco. But it’s not just billions more dinky YouTube videos that will suck up all that bandwidth. It’s our shifting TV habits.

    The number of online videos and the size of those videos is skyrocketing as more and more of us are ditching the traditional cable package and turning to our internet-enabled devices to watch television. What’s more, we’ll increasingly be streaming really big video files, like the high-quality 4K video needed to play on HD monitors. By 2019, 30 percent of internet-connected TVs are expected to be 4K.

    “The cord-cutting household [consumes] more than twice as much data per month as non-cord-cutters,” Cisco exec Robert Pepper tells the Washington Post.

    In other words, cord-cutters aren’t only going to change the business of television, they’re also going to dramatically change the amount of internet that we need. Consider this: Global IP traffic is five times as big as it was five years ago, and will triple over the next five years. Next year, worldwide IP traffic will reach 1.1 zettabytes per year (1 zettabyte is 1000 exabytes; one exabyte is one billion gigabytes). That number will go up to two zettabytes in 2019.
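    Those units are worth unpacking; a quick check of the numbers:

        GB = 10 ** 9        # bytes in a gigabyte (decimal units)
        EB = 10 ** 9 * GB   # one exabyte = one billion gigabytes
        ZB = 1000 * EB      # one zettabyte = 1000 exabytes

        annual = 1.1 * ZB   # next year's forecast IP traffic
        print(f"{annual / EB:.0f} EB/year, ~{annual / EB / 12:.0f} EB/month")

        cagr = 3 ** (1 / 5) - 1   # "triple over the next five years"
        print(f"~{cagr:.0%} compound annual growth")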

    We’ve already heard about two potential problems here: We might run out of IP addresses and our internet infrastructure might not be able to handle all that data.

    By 2019, traffic from wireless and mobile devices will exceed traffic from wired devices, accounting for two-thirds of all traffic. Right now, wifi/mobile represent 54 percent.

  35. Tomi Engdahl says:

    Slideshow: IMEC Innovates in Wireless
    http://www.eetimes.com/document.asp?doc_id=1327003&

    Interuniversity Microelectronics Centre (imec) is a micro- and nanoelectronics research center headquartered in Leuven, Belgium. Its research, especially in the wireless arena, has greatly impressed me with its technical prowess.

  36. Tomi Engdahl says:

    End of roaming charges: presidency strikes a deal with European Parliament
    http://www.consilium.europa.eu/en/press/press-releases/2015/06/30-roaming-charges/

    In the early hours of 30 June 2015, after 12 hours of negotiation, the Latvian presidency reached a provisional deal with the European Parliament on new rules to end mobile phone roaming fees and safeguard open internet access, also known as net neutrality rules. For the Council, the agreement still has to be confirmed by member states.
    End of roaming fees in mid-2017

    Under the agreement, roaming surcharges in the European Union will be abolished as of 15 June 2017. However, roaming providers will be able to apply a ‘fair use policy’ to prevent abusive use of roaming. This would include using roaming services for purposes other than periodic travel.

    Safeguards will be introduced to address the recovery of costs by operators.

    Roaming fees will already go down on 30 April 2016, when the current retail caps will be replaced by a maximum surcharge of €0.05 per minute for calls, €0.02 for SMSs and €0.05 per megabyte for data.
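    To put the interim caps in concrete terms, here is the worst-case surcharge for a short trip from 30 April 2016 (the usage figures are made up for illustration):

        CALL_PER_MIN = 0.05   # EUR surcharge cap per minute of calls
        SMS_EACH     = 0.02   # EUR surcharge cap per SMS
        DATA_PER_MB  = 0.05   # EUR surcharge cap per megabyte

        minutes, texts, megabytes = 30, 20, 100   # hypothetical week abroad
        surcharge = (minutes * CALL_PER_MIN
                     + texts * SMS_EACH
                     + megabytes * DATA_PER_MB)
        print(f"max roaming surcharge: EUR {surcharge:.2f}")   # EUR 6.90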

    Protecting open internet

    Under the first EU-wide open internet rules, operators will have to treat all traffic equally when providing internet access services. They may use reasonable traffic management measures. Blocking or throttling will be allowed only in a limited number of circumstances, for instance to counter cyber-attacks and prevent traffic congestion. Agreements on services requiring a specific level of quality will be allowed, but operators will have to ensure the general quality of internet access services.

  37. Tomi Engdahl says:

    Glyn Moody / Ars Technica UK:
    EU plans to destroy net neutrality by allowing Internet fast lanes — Potentially a big defeat for EU internet users, online startups and future innovation. — A two-tier Internet will be created in Europe as the result of a late-night “compromise” between the European Commission, European Parliament and the EU Council.

    EU plans to destroy net neutrality by allowing Internet fast lanes
    Potentially a big defeat for EU internet users, online startups and future innovation.
    http://arstechnica.co.uk/tech-policy/2015/06/eu-plans-to-destroy-net-neutrality-by-allowing-internet-fast-lanes/

    A two-tier Internet will be created in Europe as the result of a late-night “compromise” between the European Commission, European Parliament and the EU Council. The so-called “trilogue” meeting to reconcile the different positions of the three main EU institutions saw telecom companies gaining the right to offer “specialised services” on the Internet. These premium services will create a fast lane on the Internet and thus destroy net neutrality, which requires that equivalent traffic is treated in the same way.

    But running alongside this “open Internet,” on the same network, there will be “specialised services,” which are not open and where paid prioritisation is permitted: “The new EU net neutrality rules guarantee the open Internet and enable the provision of specialised or innovative services on condition that they do not harm the open Internet access.” The caveat is vague, and in practice will not prevent “specialised services” competing with those offered on the “open Internet”—the Commission mentions “internet TV” as an example of a specialised service—so large companies will be able to offer premium services at attractive prices, which startups with limited resources will find hard to match.

    The Commission is aware of that threat to fair competition. In its press release, it says the new rules will mean that “access to a start-up’s website will not be unfairly slowed down to make the way for bigger companies.” However, this only applies to its newly-defined “open Internet,” where all traffic must be treated fairly. It does not apply to specialised services, which will be able to pay telecoms companies for faster delivery than rivals on the “open Internet.” Inevitably, this tilts the playing field in favour of established players with deeper pockets.

    The Commission’s main argument for introducing “specialised services” is to encourage new offerings: “more and more innovative services require a certain transmission quality in order to work properly, such as telemedicine or automated driving.” But it would be unwise to run these kinds of critical services over a connection that was also running traditional Internet services: instead, a dedicated connection used only for that service would be needed. In that case, prioritisation and net neutrality would not be an issue because it would not be used for anything else.

    The new rules are further biased in favour of incumbents by allowing zero rating: “Zero rating, also called sponsored connectivity, is a commercial practice used by some providers of Internet access, especially mobile operators, not to count the data volume of particular applications or services against the user’s limited monthly data volume.” This, too, creates a two-tier Internet.

    Calling the details of the deal “blurry” and “ambiguous,” particularly on “specialised services,” Joe McNamee of the digital rights group EDRi wrote today: “This is ‘just’ a provisional agreement. First, the explanatory recitals need to be finalised. Then, the EU institutions need to decide if they are really prepared to create such legal uncertainty for European citizens and business. This will become clear in the coming weeks.”

  38. Tomi Engdahl says:

    Study: Wi-Fi Devices Require Minimum Distance from Medical Equipment
    http://www.medicaldesignbriefs.com/component/content/article/1104-mdb/news/22431

    The electromagnetic radiation caused by wireless technology can interfere with electronic medical equipment and lead to serious clinical consequences for patients. New research from Concordia University helps to define safety parameters for health-care workers carrying Wi-Fi devices.

    Hospitals often specify that staff members carrying wireless transmitters not approach sensitive electronic medical devices any closer than a designated minimum separation distance (MSD). The research team set out to study if the policy truly affects the risk of electromagnetic interference.

    “We found that MSD policy really does work. If hospital staff comply fully with the policy, they can have a tablet in the same room as the patient and medical equipment without posing a danger.”

    According to the study, the risk reduces rapidly by increasing the MSD from zero to a small value. The risk does not decrease further, however, with larger minimum separation distances.

    Is your tablet a risk to hospital care?
    http://www.concordia.ca/news/cunews/main/stories/2015/06/16/research-study-examines-wireless-transmitters-in-hospitals.html

    A Concordia study examines whether wireless transmitters represent a danger to patients

    Thousands of patients die each year in hospitals across North America due to medical errors that could be prevented were doctors and nurses provided with instant access to patient records via wireless technology. Cue the Catch-22: the electromagnetic radiation caused by those very devices can interfere with electronic medical equipment and thus lead to serious clinical consequences for patients.

    Luckily, that could soon change thanks to new research from Concordia University that helps define a clear rule of thumb for how close health-care workers with their Wi-Fi devices can be to electronic medical equipment.

    In a study published recently in IEEE Transactions on Electromagnetic Compatibility,

    Hospitals often specify that staff members carrying wireless transmitters not approach sensitive electronic medical devices any closer than a designated minimum separation distance (MSD).

    “We found that MSD policy really does work. If hospital staff comply fully with the policy, they can have a tablet in the same room as the patient and medical equipment without posing a danger,” says Ardavan.

    “We observed that the risk reduces rapidly by increasing the MSD from zero to a small value. After that, the risk doesn’t decrease when you increase the MSD beyond a level that we call the optimal MSD. This indicates that specifying larger minimum separation distances doesn’t necessarily increase safety.”

    The bottom line: keep your wireless device further than arm’s length from medical equipment and the risk of interference is very small.
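    The “rapid fall, then flat” risk curve matches basic field physics: radiated field strength falls off as 1/distance in the far field. A rough sketch, assuming a 100mW transmitter and the common 3 V/m immunity level for medical gear (both figures are my assumptions, not numbers from the study):

        import math

        def e_field(p_watts, d_m, gain=1.0):
            # far-field estimate of field strength: E = sqrt(30 * P * G) / d
            return math.sqrt(30 * p_watts * gain) / d_m

        for d in (0.1, 0.3, 0.5, 1.0, 2.0):
            e = e_field(0.1, d)   # 100 mW Wi-Fi transmitter
            flag = "above" if e > 3.0 else "below"
            print(f"{d} m: {e:.2f} V/m ({flag} a 3 V/m immunity level)")

    At arm’s length (about a metre) the estimate already drops below the assumed immunity level, which is consistent with the study’s bottom line.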

  39. Tomi Engdahl says:

    Ericsson Layoffs Target R&D, Supply
    Cuts and reorg a result of losses to Huawei, insider says
    http://www.eetimes.com/document.asp?doc_id=1327014&

    Ericsson will release 10% of its Swedish workforce as part of an efficiency and cost-reduction plan first announced in March. The plan will cut approximately 2,100 jobs and save the equivalent of $301 million USD in the second quarter of 2015.

    “The official motivation was the decline of 2G and leapfrog over 3G from operators switching to 4G,” said a former Ericsson employee who wished to remain anonymous. “The main reason is flat sales and significant loss of business to Huawei.”

  40. Tomi Engdahl says:

    Trevor Hughes / USA Today:
    FBI investigating 11 physical attacks on internet fiber optic cables in San Francisco-area dating back a year, including one Tuesday on Level 3 and Zayo

    FBI investigating 11 attacks on San Francisco-area Internet lines
    http://www.usatoday.com/story/tech/2015/06/30/california-internet-outage/29521335/

    The FBI is investigating at least 11 physical attacks on high-capacity Internet cables in California’s San Francisco Bay Area dating back a year, including one early Tuesday morning.

    Agents confirm the latest attack disrupted Internet service for businesses and residential customers in and around Sacramento, the state’s capital.

    FBI agents declined to specify how significantly the attack affected customers, citing the ongoing investigation. In Tuesday’s attack, someone broke into an underground vault and cut three fiber-optic cables belonging to Colorado-based service providers Level 3 and Zayo.

    “When it affects multiple companies and cities, it does become disturbing,” Wuthrich said. “We definitely need the public’s assistance.”

  41. Tomi Engdahl says:

    Jeremy Horwitz / 9to5Mac:
    GigSky leverages Apple SIM to offer iPad cellular data plans in 90+ countries/territories
    http://9to5mac.com/2015/06/30/gigsky-apple-sim-ipad-cellular/

    Despite lackluster support from major carrier partners, Apple’s carrier-agnostic Apple SIM demonstrated its potential today with the announcement that GigSky — “the first global mobile network designed for travelers” — will offer short-term iPad Air 2 and iPad mini 3 cellular data plans in over 90 countries and territories. “Now there’s no need to pick-up a local SIM, hunt for Wi-Fi, or travel in fear of excessive data roaming charges,” GigSky explained. “iPad users can choose a GigSky data plan upon arrival right from their iPad Air 2 or iPad mini 3 with Apple SIM installed, and easily connect to family and friends, stay in touch and share their travel.”

    GigSky’s list of countries notably covers much of North America, Europe, Australia and New Zealand, with patchier offerings across South America, Asia, and Africa. Data prices vary dramatically by location.

  42. Tomi Engdahl says:

    ‘Watered down’ net neutrality rules could mean ‘almost anything’
    Lobbyists wail and gnash teeth over deal born of darkness
    http://www.theregister.co.uk/2015/07/01/eu_net_neutrality_rules_could_mean_anything/

    The EU negotiators’ proposed new rules on net neutrality – reached in the early hours of this morning – have caused serious concerns among digital rights activists, who cite loopholes and vagueness.

    After three months of to-ing and fro-ing between member states and the European Parliament (a situation described by parliament insiders as less “give and take” and more “take, take, take” by the national negotiators) a last-ditch deal was pushed through at 3am.

    Although the phrase “net neutrality” has not appeared at all in recent leaks of the EU’s Telecoms Package, it does now claim to “safeguard equal and non-discriminatory treatment of traffic” on the internet.

    Net providers would be banned from blocking or throttling internet speeds for certain services for commercial reasons.

    However, due in large part to the intransigence of national negotiators, several loopholes have been left open. For example, internet traffic can be managed to deal with “temporary or exceptional congestion”, but there is no explanation as to what constitutes “temporary” or “exceptional”.

    Dutch MEP Marietje Schaake (Liberals and Democrats) had pushed for “clearer language that unequivocally safeguards net neutrality in Europe [and] the compromise reached is a watered down version of the strong ambitions of the European Parliament.”

    “We need to make sure Europe can lead in safeguarding the open internet, fostering innovation, and ensuring fair competition in the digital single market,” she added.

    Although more positive about the deal, James Waterworth, of industry NGO the Computer and Communications Industry Association, also said the real worth of the law would be in its interpretation: “The agreement is right to allow for traffic management and specialised services, but also right to prohibit discrimination and ensure that traffic is treated in a non-discriminatory manner.”

    “However, the rules will only be worth something if effectively supervised by national telecoms regulators. This will be the critical next step,” he added.

  43. Tomi Engdahl says:

    ICANN’s leaving the nest, so when will it grow up?
    The org that will run the internet still acts like a teenager
    http://www.theregister.co.uk/2015/06/25/icann_opinion/

    ICANN is 17 years old. It’s about to be given the keys to its dad’s car. And we are all going to have to take a ride with it every day.

    On June 30, 2016, ICANN, which oversees the global domain name service (DNS), will take over the IANA contract on a semi-permanent basis from the US government.

    The IANA contract means very little to everyday internet users, just as ICANN means very little to those not involved with the internet’s technical underpinnings. But both are fundamental to how the internet works. The fact you have probably never heard of them is an endorsement of the job they are doing.

    Transparency: We’ve heard of it

    Now that ICANN has largely won its fight to win control of the IANA contract, rather than see it given to a third party, the corporation is fighting on a second front: accountability.

    As a condition of moving the IANA contract, the internet community insisted – and was backed up by the US government – that a range of improvements be made to the organization’s “accountability.” This topic has been a persistent problem with ICANN, to the extent that it has undergone no fewer than seven formal accountability reviews in the last decade.

    Culture clash

    It is easy to laugh off such transparent efforts to control the conversation, but a deeper truth – and much bigger problem – lies at the heart of them.

    Inside ICANN, the criticisms of the corporation’s lack of accountability and opaque decision-making were taken personally. Staff went to enormous lengths to bypass or undermine those critics, planning a seemingly endless series of meetings in an effort to find the right answer to take to the board, and drawing up a long list of queries into all of the other proposals so they could be taken off the table.

    The corporate affairs team took a different approach.

    Just as ICANN was showing real signs of maturity, it lapsed. Rather than using its greater autonomy to step up to the plate, the organization’s prevailing attitude was that it couldn’t believe its luck. And then, with the arrival of a new CEO and the approval of the money-minting new gTLD program, ICANN more than quadrupled its own budget. It’s now a child with both fewer constraints and more money to spend.

    Now in 2016, with the transitioning of the IANA contract, ICANN is finally coming of age and the US government can no longer expect to keep it in its house. Rather than sending forth a well-prepared and mature young adult, however, we’re letting loose a know-it-all teenager with a chip on its shoulder and a determined belief that it doesn’t have to listen to anyone.

  44. Tomi Engdahl says:

    How Verizon Is Hindering NYC’s Internet Service
    http://tech.slashdot.org/story/15/07/01/0224222/how-verizon-is-hindering-nycs-internet-service

    Verizon promised to make FiOS available to all New York City residents. The deadline passed a year ago, and many residents still don’t have FiOS as an option, but Verizon claims to have done its part. “The agreement required Verizon to ‘pass’ homes with fiber (not actually connect them), but no one wrote down in the agreement what they thought ‘pass’ meant. (Verizon’s interpretation, predictably, is that it doesn’t have to get very close.)”

    How to Stop Verizon from Screwing New York City
    https://medium.com/backchannel/how-to-stop-verizon-from-screwing-new-york-city-218a397e9d0b

    Installing fiber — future-proof, infinitely upgradeable by swapping out electronics, potentially unlimited data capacity, and now the global standard — is a long-term play in a context in which shareholders are accustomed to quarter-over-quarter dividend increases and frequent buybacks. So Wall Street punishes the idea. And Wall Street also isn’t happy about cheap, competing commodity networks, which means incumbents are focused on selling content along with data. Which, in turn, means buying programming, which means paying ever-higher prices as the concentrated companies that produce that content consolidate.

    In the meantime, rather than install fiber, Verizon can continue to wring profits from its existing, and pretty much outdated, copper network in the form of increasingly obsolete DSL access. These are married to pay TV services from satellite provider DirecTV. Verizon has zero reason to be enthusiastic about ripping up that copper, particularly in areas where it expects that subscription levels for its extravagantly expensive FiOS bundled services will be low.

    But I have a suggestion. Here’s a plan for New York City, one that has the potential to be a win for everyone concerned: Cut a different deal with Verizon. Make Verizon into the operator of a passive, neutral fiber network that (as in Seoul and Stockholm) is connected to every single home and business. (This would require an expanded network, which is necessary in any case.) Release Verizon from the shackles of serving customers and acquiring programming. Let other ISPs emerge that will actually have the relationships with customers. Set a reasonable price for provision of wholesale fiber access that Verizon must charge to any ISP.

    Result: a thriving, competitive retail marketplace for high-speed Internet access that reaches every single New Yorker (including, particularly, the 36% of the city’s residents who now don’t have a connection at home). Cheap prices for world-class data connections. Economic growth for the city, as it retakes its position as a global metropolis. Cheaper healthcare costs, as residents replace expensive in-person visits with equally in-person, but remote, contacts. And on and on.

    Verizon is already mostly a wireless company — more than 60% of its revenues come from Verizon Wireless — because it thinks its profits are more certain there.

  45. Tomi Engdahl says:

    UC San Diego Researchers Amp Up Internet Speeds
    http://www.informationweek.com/strategic-cio/digital-business/uc-san-diego-researchers-amp-up-internet-speeds/d/d-id/1321088?

    Researchers at UC San Diego have blown through expected limits of data transmission on fiber optic cable, paving a new lane for faster Web surfing.

    Photonics researchers at the University of California, San Diego have increased the maximum power, and therefore the distance, at which optical signals can be sent through optical fibers, indicating a new path towards ultra high-speed Internet connectivity.

    The team of electrical engineers broke through key barriers that limit the distance information can travel in fiber optic cables and still be accurately deciphered by a receiver — information traveled nearly 7,500 miles through fiber optic cables with standard amplifiers and no electronic regenerators.

    The test results suggest the potential for elimination of electronic regenerators placed periodically along the fiber link.

    The laboratory experiments involved setups with three and with five optical channels, which interact with each other within the silica fiber optic cables, but the researchers noted this approach could be used in systems with far more communications channels.

    The official name of the paper is “Overcoming Kerr-induced capacity limit in optical fiber transmission.”

    “Today’s fiber optic systems are a little like quicksand,”

    “With quicksand, the more you struggle, the faster you sink,” Alic added. “With fiber optics, after a certain point, the more power you add to the signal, the more distortion you get, in effect preventing a longer reach. Our approach removes this power limit, which in turn extends how far signals can travel in optical fiber without needing a repeater.”

    The same research group published a paper last year outlining the fact that the experimental results they are now publishing were theoretically possible, and the university has also filed a patent on the method and applications of frequency-referenced carriers for compensation of nonlinear impairments in transmission.

    “Crosstalk between communication channels within a fiber optic cable obeys fixed physical laws. It’s not random,”

    The Kerr effect, also called the quadratic electro-optic effect (QEO effect), is a change in the refractive index of a material in response to an applied electric field — a challenge which the team at UC San Diego appears to have surmounted.

  46. Tomi Engdahl says:

    Reuters:
    Sources: FCC to approve AT&T’s $48.5B DirecTV acquisition with conditions as soon as next week
    http://www.reuters.com/article/2015/06/30/us-dirctv-at-t-m-a-idUSKCN0PA2Y820150630

    AT&T Inc’s (T.N) proposed $48.5 billion acquisition of DirecTV (DTV.O) is expected to get U.S. regulatory approval as soon as next week, according to people familiar with the matter, a decision that will combine the country’s No. 2 wireless carrier with the largest satellite-TV provider.

  47. Tomi Engdahl says:

    U.S. to Run Out of IPv4 Addresses This Summer
    http://www.pcmag.com/article2/0,2817,2484216,00.asp

    An Internet storm is brewing, and not everyone is prepared.

    Specifically, we’re running low on IPv4 addresses, a supply that is expected to be exhausted by this summer in North America.

    Every desktop and laptop, server, scanner, printer, modem, router, smartphone, and tablet is assigned an IP address, which serves as a unique identifier to track online movements.

    As PCMag explained in 2011, IP version 4 (IPv4) is the most widely deployed standard. But there is a finite number of these IP addresses that can be distributed, and it’s crunch time. Asia and Europe are already out, and the U.S. is up next, according to The Wall Street Journal.
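    The scale gap between the two protocols is the whole story here; the arithmetic:

        ipv4_total = 2 ** 32    # 32-bit addresses: about 4.3 billion
        ipv6_total = 2 ** 128   # 128-bit addresses: about 3.4e38

        print(f"IPv4: {ipv4_total:,} addresses")
        print(f"IPv6: {ipv6_total:.2e} addresses")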

    US Supply Of IPv4 Addresses Running Out
    http://www.technewstoday.com/23731-us-supply-of-ipv4-addresses-running-out/

  48. Tomi Engdahl says:

    IPv4 address stock dwindles as North American database runs dry
    http://www.v3.co.uk/v3-uk/news/2416031/ipv4-address-stock-dwindles-as-north-american-database-runs-dry

    The number of available IPv4 address blocks has fallen so low that the US organisation responsible for handing out addresses has rejected a request because there was not enough stock.

    The American Registry for Internet Numbers (ARIN) posted a note on its website confirming the move, although it did not say from where the request had come.

    “ARIN activated the IPv4 Unmet Requests policy this week with the approval of an address request that was larger than the available inventory in the regional IPv4 free pool,” said ARIN chief executive John Curran.

    The move does not mean that there are no IPv4 addresses left, but that requests will have to be smaller to be accommodated or applicants will have to wait for blocks of address space to be returned.

    Curran encouraged companies to consider the use of IPv6 instead.

    ARIN is the latest major holder of IPv4 addresses to confirm that it now has limited stock, after similar announcements by organisations in Asia in 2011, Europe in 2012 and Latin America in 2014.

  49. Tomi Engdahl says:

    Iljitsch van Beijnum / Ars Technica:
    Large blocks of IPv4 addresses are no longer available in North America; organizations can either join the wait list or request smaller blocks

    It’s official: North America out of new IPv4 addresses
    But the move to IPv6 picks up speed.
    http://arstechnica.com/information-technology/2015/07/us-exhausts-new-ipv4-addresses-waitlist-begins/

    ARIN, the American Registry for Internet Numbers, has now activated its “IPv4 Unmet Requests Policy.” Until now, organizations in the ARIN region were able to get IPv4 addresses as needed, but yesterday, ARIN was no longer in the position to fulfill qualifying requests. As a result, ISPs that come to ARIN for IPv4 address space have three choices: they can take a smaller block (ARIN currently still has a limited supply of blocks of 512 and 256 addresses), they can go on the wait list in the hopes that a block of the desired size will become available at some point in the future, or they can buy addresses from an organization that has more than it needs.

    “If you take a smaller block, you can’t come back for more address space for 90 days,” John Curran, CEO of ARIN, told Ars. “We currently have nearly 500 small blocks remaining, but we handle 300 to 400 requests per month, [so] those remaining small blocks are going to last between two and four weeks.”

    “IPv6 is going to happen, that’s the direction it’s going,” she said. “But it’s going to take a while. Organizations are not ready to turn to IPv6 tomorrow; this will take a few years. A transfer market allows for the transition from IPv4 to IPv6 in a responsible way, not a panicked way.”

    “The price for blocks of IPv4 addresses of 65,536 addresses (a /16) or smaller is about $7 to $8 per address in the ARIN region. In other regions, which have fewer addresses out there, the price tends to be a little higher,”
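    Block sizes and the quoted per-address prices translate as follows (taking the midpoint $7.50 as an assumption):

        def block_size(prefix):
            # number of IPv4 addresses in a /prefix CIDR block
            return 2 ** (32 - prefix)

        # ARIN's remaining small blocks are the /24s (256) and /23s (512)
        for prefix in (24, 23, 16):
            n = block_size(prefix)
            print(f"/{prefix}: {n:,} addresses, ~${n * 7.5:,.0f} at $7.50 each")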

    Goodman stressed that buyers of addresses should make sure they are “clean” and have a known history. There have been reports of address sales where the addresses turned out to be in ongoing use after completion of the transaction.

    Bring on the IPv6!

    The Internet Engineering Task Force (IETF) saw the eventual depletion of IP addresses looming in the early 1990s, so they set out to solve the problem and came up with a new version of the Internet Protocol. The old IP has version number 4; the new version is 6.

    The trouble is that, of course, old systems can only handle IPv4 with its 32-bit addresses. That problem has pretty much been solved in the intervening years, and today virtually all operating systems can handle 128-bit IPv6 addresses—although some applications can’t or don’t handle them properly.

    The main issue remaining is that most networks simply haven’t enabled IPv6 yet. Although turning on IPv6 is not as hard as some people think, it’s not entirely trivial either in larger networks. Internet Service Providers, routers, firewalls, load balancers, and DNS servers must all be IPv6-ready and be reconfigured. And then there are all those little (and not so little) homegrown applications that keep businesses running. In almost all cases, a new IPv6 numbering plan is required, and DHCP works differently with IPv6 than with IPv4.
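    On the application side, dual-stack support is often already in place, since the standard resolver returns both record types. A minimal Python check (the hostname is just an example):

        import socket

        # Ask the resolver for both A (IPv4) and AAAA (IPv6) records.
        seen = set()
        for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443):
            addr = sockaddr[0]
            if addr not in seen:
                seen.add(addr)
                label = "IPv6" if family == socket.AF_INET6 else "IPv4"
                print(label, addr)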

    And as someone smart recently said about ISPs adopting IPv6, referring to Metcalfe’s Law, “If everyone is doing it, you have to do it, too.”

  50. Tomi Engdahl says:

    North America down to its last ~130,000 IPv4 addresses
    If you want more, join this orderly queue, maybe forever, say number wonks
    http://www.theregister.co.uk/2015/07/03/america_down_to_its_last_130000_ipv4_addresses/

    The unmet requests policy doesn’t mean the IPv4 well is dry. ARIN still has small blocks of IPv4 addresses but warns it “may not be able to fulfill requests for IPv4 address space. This means we may be unable to fulfill a particular customer request for certain sizes of IPv4 address blocks, though not necessarily that we are out of IPv4 addresses entirely.”

    ARIN administers IP addresses for “Canada, many Caribbean and North Atlantic islands, and the United States,” so this isn’t just a problem for the United States.

