Telecom and networking trends for 2016

At the end of 2015 there were 3.2 billion people online. 67% of Americans now have broadband at home, down from 70% in 2013, while 13% connect via smartphone only, up from 8% in 2013; smartphone penetration in the US stands at 68%. The share of Americans with broadband at home has plateaued, and more people rely only on their smartphones for online access: the downtick in home high-speed adoption has taken place at the same time as the rise of “smartphone-only” adults – those who own a smartphone that they can use to access the internet but do not have traditional broadband service at home. The American broadband market is notoriously oligopolistic, with the majority of citizens offered limited choice, especially at the high-speed end, complete with high monthly fees.

Fixed Internet speeds increase – even without fiber to every house. We will start to see more 1Gbps Internet connections – and not all of them need fiber (2014 was the year of “fiber everywhere”). Comcast, for example, has rolled out the “world’s first” DOCSIS 3.1 modem, pumping 1Gbps over existing cable. It should, in theory, be quick and easy to get 1Gbps broadband to your home using DOCSIS 3.1, but I expect we will see only a few experimental roll-outs of the service in 2016. The beauty of DOCSIS 3.1 is that it is backwards compatible.

Mobile networks continue to lead the way in connecting people for the next generation of communications: mobile subscriptions are now at 7.1 billion globally, and more than 95% of the world’s population is now within reach of a mobile network signal. Mobile cellular subscriptions have overtaken fixed phone subscriptions, mobile broadband subscriptions and households with Internet access. This development most probably explains why network jobs are hot: salaries are expected to rise in 2016, with wireless network engineers, network admins, and network security pros especially in demand.

There are still some 350 million people globally with no way to access the Internet, mobile or otherwise, and there will be a race to connect at least some of them. The stakes in the broadband satellite race are high: building a satellite network, with its associated ground-based facilities and user terminals, to provide Internet access to even the remotest and poorest parts of the world will be a huge technical, regulatory, and business challenge. Data services over low Earth orbit (LEO) satellite networks started appearing in the late 1990s, followed by mobile telephony via LEO satellites, but never managed to deliver on the hype – partly because of technology constraints, partly because of poor business models. Over the years there have been huge technology advances: satellites can now be made much smaller and lighter, so launch costs are significantly lower, and the component costs of terminals and handsets have plummeted. These factors have clearly helped the business proposition, but challenges remain.

There will be new radio frequencies available for wireless communications thanks to the WRC-15 spectrum decisions. In addition to confirming the use of the 700 MHz band (technically 694 to 790 MHz) for mobile broadband services in ITU Region 1 – which includes Europe, Africa, the Middle East and Central Asia – delegates agreed to harmonize 200 MHz of the C-band (3.4 to 3.6 GHz) to improve capacity in urban areas and for use in small cells, and the L-band (1427-1518 MHz) to improve overall coverage and capacity. So the mobile broadband sector now has, at least in the short to medium term, three globally harmonized bands. There was also a decision on spectrum for wireless avionics intra-communications (WAIC).

5G gets started. Just five years after the first 4G smartphone hit the market, the wireless industry is already preparing for 5G: cell phone carriers, smartphone chip makers and the major network equipment companies are working on developing 5G network technology for their customers. Many challenges remain, as 5G infrastructure must be able to serve billions of small internet-connected objects in addition to large consumers of information. 700 MHz harmonization is a key feature in operators’ plans to begin rolling out 5G services, and the C-band is also likely to be used for 5G. After 2016, delivering the fastest promised 5G speeds will require deploying very high frequency bands, mainly above 24 GHz.

5G will not only be about a new air interface with faster speeds; it will also address network congestion, energy efficiency, cost, reliability, and connections for billions of people and devices. Many believe that a critical success factor for 5G will be a fully revamped TCP/IP stack, and a group of major vendors has put forward an open source TCP/IP stack, OpenFastPath, which they say is designed to reinvigorate the ancient and rather crusty protocol. Cyber security research will be important in 2016, as 5G networks will be critical infrastructure on top of which, for example, transport, industry, health and new operators will set up their business around 2020. Growing network virtualization functionality and programmability are both an opportunity and a threat to security. Keep in mind that everything connected to the Internet can, and will, be hacked.

Heightened interest in the Internet of Things (IoT) and the Internet of Everything (IoE) will continue in 2016. IoT networks heat up in 2016: low-power wide area networks for the Internet of Things have been attracting new entrants and investors at a heady pace, with unannounced offerings still in the pipeline for 2016, all trying to enable new IoT apps by undercutting cellular and WiFi on cost and battery life. There are many competing technologies in this field, and some will turn out to be winners and some losers. Remember that IoT is forecast to reach 50 billion connections by 2020, so there are plenty of business opportunities for many IoT technologies.

 


2016 will be another booming year for Ethernet. Wi-Fi is obviously more convenient than wired Ethernet cables for the average mobile user, but Ethernet still offers advantages – faster speeds, lower latency, and no wireless interference problems. Ethernet matters a lot for desktop PCs, laptops at desks, game consoles, TV-streaming boxes, and other devices – and when building backbone networks and data centers. Assuming it’s easy enough to plug the devices in with an Ethernet cable, you’ll get a more consistently solid connection. Yes, Ethernet is better.

The augmented global demand for data centers is the key driver of growth in the Global Ethernet Switch and Router Market 2016-2020. 25G, 50G and 100G Ethernet is finding its place in the data center. Experts predict that the largest cloud operators will shift to 100G Ethernet fabrics, while cost-efficient 25G and 50G will remain the workhorses for most of the other well-known data-center companies. The increasing use of advanced technologies, such as 10GbE ports, by enterprises and universities for educational and official purposes is a significant factor in the enterprise and campus segment. The key players in this segment will be Arista Networks, Brocade Communications, Cisco, Dell, HP, Huawei and Juniper Networks. The 2015 Ethernet Roadmap lays out physical links through 2020 and looks ahead to terabit speeds as well.

I expect 2016 will be a year of widespread product adoption of 2.5 and 5 Gigabit Ethernet over twisted-pair copper cabling (2.5GBASE-T and 5GBASE-T), as the transition to next-generation 802.11ac Wave 2 access points drives significant demand for 2.5G ports. Enterprise operators are looking to fill the gap between 1G and 10G over the legacy unshielded twisted-pair copper cabling (Category 5e/Category 6) that is installed everywhere. IEEE 802.11ac is 3x faster and 6x more power efficient than its predecessor, 802.11n, while remaining interoperable with it. Rapid adoption of 802.11ac is driven by the fact that tablets and smartphones are becoming ubiquitous in the workplace.

Driven by IEEE standards, Ethernet hits the road in 2016: a trend emerging in the automotive market is the migration of Ethernet, a tried-and-true computer network technology, into connected cars. The proliferation of advanced driver assistance system (ADAS) features in many vehicles is also expected to expand Ethernet use. Completion of both the IEEE 100BASE-T1 and 1000BASE-T1 standards is expected; the emergence of the 1000BASE-T1 standard in mid-2016 provides a roadmap for automotive Ethernet evolution. Starting in 2016, Ethernet will be seen as the dominant in-vehicle network backbone.

Prepare for the PAM4 phase shift. PAM4 (four-level pulse-amplitude modulation) will come into wider use in 2016 because we constantly need faster communications links between ICs inside devices. NRZ won’t work at 56 Gbps, and PAM4 seems to be the way to go, as it doubles the bit rate for a given baud rate. At 56 Gbps, 400 Gbps Ethernet can be realized with four lanes of PAM4, but might require eight 28 Gbps lanes with NRZ. PAM4 is also gaining traction in 28 Gbps links. The bad news is that PAM4 trades off SNR (signal-to-noise ratio) for bandwidth, meaning it is more sensitive to noise and timing skew than NRZ; PAM4 brings SNR to the forefront of design issues. With four voltage levels and three eyes, PAM4 requires new design techniques for recovering embedded clocks and for identifying bits in symbols. PAM4 will be used mainly on copper links, but it can also be used over fiber optic links, which has its own set of challenges. These and other issues are forging new techniques for measuring and simulating PAM4 signals.
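To make the trade-off concrete, here is a minimal Python sketch of the two line codes. The Gray-coded level mapping below is a common convention assumed for illustration, not taken from any specific standard:

```python
# NRZ: one bit per symbol, two levels.
NRZ_LEVELS = {0: -1.0, 1: +1.0}

# PAM4: two bits per symbol, four levels. Gray coding means adjacent
# levels differ in only one bit, limiting the cost of a misread level.
PAM4_LEVELS = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): +1.0, (1, 0): +3.0}

def nrz_encode(bits):
    return [NRZ_LEVELS[b] for b in bits]

def pam4_encode(bits):
    assert len(bits) % 2 == 0, "PAM4 consumes two bits per symbol"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [1, 0, 0, 1, 1, 1, 0, 0]
print(nrz_encode(bits))   # 8 symbols, one bit each
print(pam4_encode(bits))  # 4 symbols carrying the same data at half the baud rate
```

The SNR penalty is visible in the level spacing: squeezing four levels into the same voltage swing leaves each of the three eyes roughly one third the height of the NRZ eye, which is where the often-quoted ~9.5 dB penalty (20·log10 3) comes from.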

The term Cloud Scale Networking will be seen. The virtualization of networks, storage, and servers is reshaping the way organizations use IT. Cloud computing plays an essential role in this process, as the cloud delivers the additional capacity required to satisfy an enterprise’s or small business’s growing demand from a third party. The volume of data carried by networks has exploded. Cisco estimated last year that by 2017, data centers will handle some 7.7 zettabytes of IP traffic, two thirds of which would be on account of cloud computing, and total global data centre traffic is projected to triple by the end of 2019 (from 3.4 to 10.4 zettabytes). Legacy, tiered network designs can be replaced with scalable flat network topologies, future-proofed using open, scalable SDN and NFV platforms. The network is cloud computing’s final frontier, at the technology, people and process levels. Service providers seek to reduce costs, create new business opportunities, and introduce new services more quickly.
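As a quick sanity check on those forecast numbers (assuming the 3.4 zettabyte figure is the 2014 baseline), the tripling over five years implies roughly 25% compound annual growth:

```python
# Compound annual growth rate implied by the quoted data centre traffic
# forecast: 3.4 ZB growing to 10.4 ZB over five years.
start_zb, end_zb, years = 3.4, 10.4, 5
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")  # ~25.1%
```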

The “software-ization” of telcos and the increasing use of open-source networking will continue in 2016. In 2015, the adoption of OpenStack, OpenDaylight and OPNFV for software and services, and Open Compute for hardware, supported more virtualized, more open source network computing platforms and architectures; the trend will continue. SDN gives enterprises and carriers control of the complete network through a single logical point, thereby simplifying network design and operation. The traditional one-vendor, proprietary solution is giving way to solutions involving many suppliers – which offers customers significant cost savings and performance optimization.

After the COP21 climate summit reached a deal in Paris, there will also be interest in how clean our networking is. Communications technologies are reported to be responsible for about 2-4% of the carbon footprint generated by human activity. The need for connectivity and faster speeds keeps growing in this ever more connected world: with the penetration of smart devices, there was a tremendous increase in mobile data traffic from 2010 to 2014, and if IoT reaches the forecast 50 billion connections by 2020, current technologies would increase power consumption considerably. The push for greener technologies is tackling mobile networks first because of their high energy use: base stations and switching centers can account for between 60% and 85% of the energy used by an entire communication system. More and more facilities, especially big names like Google, Amazon and Microsoft, have looked to renewable energy.

 

820 Comments

  1. Tomi Engdahl says:

    Rockport’s Torus prises open hyperscale network lockjaw costs
    Hyperscalers rejoice; toroidal doughnut fabrics open the door to 10,000 node and beyond networking
    http://www.theregister.co.uk/2015/12/30/rockports_torus_prises_open_hyperscale_network_costs/

    Hyperscale IT is threatened by suicidally expensive networking costs. As node counts head into the thousands and tens of thousands, network infrastructure costs rocket upwards because a combination of individual node connections, network complexity, and bandwidth in a traditional (leaf-and-spine) design has a toxic effect on costs.

    The effect is exacerbated when the individual nodes are cheap as chips, relative to servers and storage arrays, as with kinetic disk drives and, eventually, SSDs. Kinetic drives have individual Ethernet connectivity, and an on-board processor responding to object style GET and PUT commands. Seagate commenced building this kind of disk drive with its Kinetic series in 2013. Western Digital/HGST and Toshiba followed suit and a KOSP industry consortium has been formed to drive standards.

    If you imagine a hyperscale data centre has an archive array formed from kinetic drives then a 10,000-node network is a realistic conception. Traditional data centre networking is based on a hierarchical 3-layer core-distribution-access device/switches design

    This is transitioning to a 2-tier spine and leaf design better suited to larger-scale networks and east-west traffic flows across the fabric rather than up and down a hierarchy. Arista, for example, advocates such a design.

    Rockport Networks, a networking startup, says this too becomes complex, unwieldy and costly as network node counts move from 1,000 to 10,000 and above. The company is developing its own torus-based networking scheme* to combat the spine and leaf disadvantages.

  2. Tomi Engdahl says:

    Networking and data center equipment are normally installed in standard 19-inch racks.
    There are places where a 19″ rack is too big.

    It seems that for SOHO and small-company networking applications, a smaller 10-inch rack is gaining support from many manufacturers.

    Other rack formats you might encounter:

    A 23-inch rack is used for housing telephone (primarily), computer, audio, and other equipment though is less common than the 19-inch rack.

    Open Rack is a mounting system designed by Facebook’s Open Compute Project that has the same outside dimensions as typical 19-inch racks (e.g. 600 mm width), but supports wider equipment modules of 537 mm or about 21 inches.

  3. Tomi Engdahl says:

    Katie Collins / CNET:
    Nokia’s €15.6B public exchange offer for Alcatel-Lucent successful, French regulator says, companies to start merging operations on January 14

    Nokia aims to hit it big in broadband
    http://www.cnet.com/news/nokia-aims-to-hit-it-big-in-broadband/

    French regulator approves Nokia’s takeover of Alcatel-Lucent, which should help it compete in the global telecom equipment market.

  4. Tomi Engdahl says:

    EFF Confirms: T-Mobile’s Binge On Optimization is Just Throttling, Applies Indiscriminately to All Video
    https://www.eff.org/deeplinks/2016/01/eff-confirms-t-mobiles-bingeon-optimization-just-throttling-applies

    Back in November, T-Mobile announced a new service for its mobile customers called Binge On, in which video streams from certain websites don’t count against customers’ data caps. The service is theoretically open to all video providers without charge, so long as T-Mobile can recognize and then “optimize” the provider’s video streams to a bitrate equivalent to 480p. At first glance, this doesn’t sound too harmful

    However, as Marvin Ammori wrote in Slate, there is another “feature” of Binge On that has many customers complaining. Ammori pointed out that T-Mobile is applying its “optimization” to all video, not just the video of providers who have asked T-Mobile to be zero-rated. T-Mobile claims it does this to provide a better experience for its customers, saying that

    “T-Mobile utilizes streaming video optimization technology throughout its network to help customers stretch their high-speed data while streaming video”

    We were curious what exactly this optimization technology involved, so we decided to test it out for ourselves. We posted a video on one of our servers and tried accessing it via a T-Mobile LTE connection using various methods and under various conditions.

    Test Results: No Optimization, and Everything Gets Throttled

    The first result of our test confirms that when Binge On is enabled, T-Mobile throttles all HTML5 video streams to around 1.5Mbps, even when the phone is capable of downloading at higher speeds, and regardless of whether or not the video provider enrolled in Binge On. This is the case whether the video is being streamed or being downloaded—which means that T-Mobile is artificially reducing the download speeds of customers with Binge On enabled, even if they’re downloading the video to watch later. It also means that videos are being throttled even if they’re being watched or downloaded to another device via a tethered connection.

    The second major finding in our tests is that T-Mobile is throttling video downloads even when the filename and HTTP headers (specifically the Content-Type) indicate the file is not a video file. We asked T-Mobile if this means they are looking deeper than TCP and HTTP headers, and identifying video streams by inspecting the content of their customers’ communications, and they told us that they have solutions to detect video-specific protocols/patterns that do not involve the examination of actual content.

    Our last finding is that T-Mobile’s video “optimization” doesn’t actually alter or enhance the video stream for delivery to a mobile device over a mobile network in any way. This means T-Mobile’s “optimization” consists entirely of throttling the video stream’s throughput down to 1.5Mbps. If the video is more than 480p and the server sending the video doesn’t have a way to reduce or adapt the bitrate of the video as it’s being streamed, the result is stuttering and uneven streaming—exactly the opposite of the experience T-Mobile claims their “optimization” will have.
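    A rough way to reproduce this kind of test yourself is to time downloads of the same file served under different labels and compare throughput. This is a hedged sketch of the general approach, not EFF’s actual methodology; the URLs are placeholders for a server you control:

    ```python
    import time
    import urllib.request

    def measure_mbps(url, chunk=64 * 1024):
        """Download url and return the average throughput in Mbps."""
        start, total = time.monotonic(), 0
        with urllib.request.urlopen(url) as resp:
            while True:
                data = resp.read(chunk)
                if not data:
                    break
                total += len(data)
        return total * 8 / (time.monotonic() - start) / 1e6

    # Same bytes served under a video-looking name and a generic name;
    # a large gap in measured speed suggests content-based throttling.
    for name in ("sample.mp4", "sample.bin"):
        url = "http://example.com/" + name  # placeholder host
        print(name, f"{measure_mbps(url):.2f} Mbps")
    ```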

    Dear T-Mobile: Stop Futzing With Your Customer’s Traffic

  5. Tomi Engdahl says:

    Steve Dent / Engadget:
    WiFi Alliance approves 802.11ah HaLow WiFi standard in 900 MHz band for IoT devices, with double the range of today’s WiFi, lower power consumption

    New WiFi standard offers more range for less power
    The WiFi Alliance’s 900MHz ‘HaLow’ standard is aimed at connected home devices.
    http://www.engadget.com/2016/01/04/new-wifi-standard-gives-more-range-with-less-power/

    The WiFi Alliance has finally approved the eagerly-anticipated 802.11ah WiFi standard and dubbed it “HaLow.” Approved devices will operate in the unlicensed 900MHz band, which has double the range of the current 2.4GHz standard, uses less power and provides better wall penetration. The standard is seen as a key for the internet of things and connected home devices, which haven’t exactly set the world on fire so far. The problem has been that gadgets like door sensors, connected bulbs and cameras need to have enough power to send data long distances to remote hubs or routers. However, the current WiFi standard doesn’t lend itself to long battery life and transmission distances.

    The WiFi Alliance said that HaLow will “broadly adopt existing WiFi protocols,” like IP connectivity, meaning devices will have regular WiFi-grade security and interoperability. It added that many new products, like routers, will also operate in the regular 2.4 and 5GHz bands.

  6. Tomi Engdahl says:

    IPv6 Turns 20, Reaches 10 Percent Deployment
    http://tech.slashdot.org/story/16/01/04/1519200/ipv6-turns-20-reaches-10-percent-deployment

    Ars notes that the RFC for IPv6 was published just over 20 years ago, and the protocol has finally reached the 10% deployment milestone. This is an increase from ~6% a year ago. (The percentage of users varies over time, peaking on the weekends when most people are at home instead of work.) “If a 67 percent increase per year is the new normal, it’ll take until summer 2020 until the entire world has IPv6 and we can all stop slicing and dicing our diminishing stashes of IPv4 addresses.”

    IPv6 celebrates its 20th birthday by reaching 10 percent deployment
    All I want for my birthday is a new IP header.
    http://arstechnica.com/business/2016/01/ipv6-celebrates-its-20th-birthday-by-reaching-10-percent-deployment/

    10 percent!

    First the good news. According to Google’s statistics, on December 26, the world reached 9.98 percent IPv6 deployment, up from just under 6 percent a year earlier. Google measures IPv6 deployment by having a small fraction of their users execute a Javascript program that tests whether the computer in question can load URLs over IPv6. During weekends, a tenth of Google’s users are able to do this, but during weekdays it’s less than 8 percent. Apparently more people have IPv6 available at home than at work.

    Of course existing IPv4 addresses aren’t going anywhere, but without a steady supply of fresh IP addresses, it’s hard to keep the Internet growing in the manner it’s accustomed to. For instance, when moving to a new place, you may discover your new ISP can’t give you your own IPv4 address anymore, and puts you behind a Carrier Grade NAT (CGN) that makes you share one with the rest of the neighborhood. Some applications, especially ones that want to receive incoming connections, such as video chat, may not work well through certain CGNs.

    If a 67 percent increase per year is the new normal, it’ll take until summer 2020 until the entire world has IPv6 and we can all stop slicing and dicing our diminishing stashes of IPv4 addresses.
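    The arithmetic behind that projection is simple enough to check; a sketch, taking the ~10% end-of-2015 figure as the starting point:

    ```python
    # Years until 10% deployment reaches 100% at 67% growth per year.
    import math

    years = math.log(100 / 10) / math.log(1.67)
    print(f"{years:.1f} years")  # ~4.5 years from the end of 2015, i.e. mid-2020
    ```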

    Why the delay?

    IPv6 has been around for two decades now. So why are we still measuring IPv6 deployment in percentage points per year, while Wi-Fi didn’t exist yet in 1995 and has used up the entire alphabet to label its variants since then, with many users on IEEE 802.11ac today?

    Most of the time, when a standard gets updated or replaced, only two devices need to care.

    Unfortunately, the Internet Protocol is different. The sending system needs to create an IP packet. Then, all the routers along the way (and any firewalls and load balancers) need to look at that IP packet to be able to send it on its way. Finally, the receiving system needs to be able to understand the IP packet to get at the information contained in it. Even worse, the applications on the sending and receiving ends often have to look at IP addresses and thus know the difference between an IPv4 address and an IPv6 address. So we can’t just upgrade a server and a client application, or two systems on opposite ends of a cable. We need to upgrade all servers, all clients, all routers, all firewalls, all load balancers, and all management systems to IPv6 before we can retire IPv4 and thus free ourselves of its limitations.

    So even though all our operating systems and nearly all network equipment supports IPv6 today (and has for many years in most cases), as long as there’s just one device along the way that doesn’t understand the new protocol—or its administrator hasn’t gotten around to enabling it—we have to keep using IPv4. In that light, having ten percent of users communicate with Google over IPv6 isn’t such a bad result.
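    Google measures this from the browser side, but you can run the moral equivalent from any host: try to open a TCP connection to a dual-stacked site over IPv6 only. A minimal sketch (www.google.com is just a convenient dual-stacked test target):

    ```python
    import socket

    def has_ipv6(host="www.google.com", port=80, timeout=3):
        """Return True if an IPv6-only TCP connection to host succeeds."""
        try:
            for family, _, _, _, addr in socket.getaddrinfo(
                    host, port, socket.AF_INET6, socket.SOCK_STREAM):
                with socket.socket(family, socket.SOCK_STREAM) as s:
                    s.settimeout(timeout)
                    s.connect(addr)
                    return True
        except OSError:
            return False
        return False

    print("IPv6 works here" if has_ipv6() else "IPv4 only")
    ```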

  7. Tomi Engdahl says:

    Broadcom Frees CPU in New Router Chip
    http://www.eetimes.com/document.asp?doc_id=1328558&

    Broadcom announced a 64-bit quad-core processor for high-end routers during International CES, held here Jan. 6-9. The processor will target smart home, Internet of Things (IoT), and enterprise applications that require increasingly higher throughput.

    The BCM4908 processor includes a 28nm 1.8GHz ARM CPU alongside Broadcom’s network packet processor to deliver more than 5 Gbits/second of system data throughput without taxing the CPU. The CPU would be left free to run a variety of software on the router, the company says.

    The chip also supports the increased speeds coming into the home through services such as Google Fiber using an interface for a 2.5 Gigabit/s Ethernet PHY. Broadcom officials said routers can achieve over 3.5 Gbits/s combined speed when paired with the company’s wave2 5G WiFi MU-MIMO chip.

    “We’re enabling OEMs to build more powerful home routers that address the increased bandwidth requirements needed to support the continued consumption of high-bandwidth content, the growing demand for UltraHD as well as the growing emergence of more IoT and smart home applications,”

    Additionally, the router processor supports tri-band 802.11ac MU-MIMO Wi-Fi – three BCM4366 4×4 radios, each with an integrated CPU for host offload processing – to provide seven CPU cores with more than 9.6 GHz of CPU compute power.

    By leaving the router CPU free from Wi-Fi processing, Broadcom expects its offering to go beyond smart home and enterprise to network attached storage

  8. Tomi Engdahl says:

    3.3 billion people use the internet. According to Internet World Stats, Western business users are, however, a small minority: most network users come from so-called developing countries.

    For example, in November China had 674 million Internet users, and in India the figure was 354 million. One third of network users, therefore, come from China or India.

    Third is the United States with 281 million Internet users.

    The weight of the rest of the world relative to Western countries will only grow, because the West is nearly saturated: in the USA almost 90 percent of the population is already on the Internet, whereas in India less than 30 percent of the population is online. In China the share of network users has already risen above 50 percent.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3797:internet-on-kiinalais-intialainen&catid=13&Itemid=101

  9. Tomi Engdahl says:

    Long-distance 11ac wireless outdoor access point/client bridge deploys point-to-point Wi-Fi up to 5 mi.
    http://www.cablinginstall.com/articles/2015/12/engenius-long-distance-wifi.html?cmpid=EnlCIMCablingNewsJanuary42016&eid=289644432&bid=1265815

    EnGenius Technologies has announced the expansion of its EnStation line of wireless access points/bridges with the addition of the EnStationAC, billed as a long-range, high-powered 11ac outdoor access point that delivers stable, high-bandwidth connections over distances up to 5 miles in a point-to-point configuration.

    “The EnStationAC is an easy-to-deploy, high-performance WiFi solution that provides wireless speeds up to 867 Mbps for point-to-point and point-to-multipoint deployments over short or long distances where Ethernet or even fiber cabling is not possible or practical,” stated a company press release

    “By pairing two EnStationAC units in a point-to-point deployment and aligning the dish-shaped antennas, the narrow-focused, wireless beam provides high bandwidth and reliable high-speed connectivity over extremely long distances. Ideal for deployments in campuses, parks and farming properties, or in complex environments such as warehouses, sports arenas, or urban areas, deploying EnStationAC lowers CAPEX and OPEX by eliminating the costs of trenching, cabling, permits and services required to run cables to expand the network.”

  10. Tomi Engdahl says:

    New OpenDNSSEC doesn’t want you to … ride into the danger zone
    (With apologies to pop sysadmin Kenny Log-ons)
    http://www.theregister.co.uk/2016/01/04/opendnssec_catches_up_with_expanding_use/

    A new version of OpenDNSSEC – an open-source implementation of DNSSEC – is hoping to plug a problem it is happy to have: increased use.

    A release candidate of version 1.4.9 was put out on Monday for testing, the key new feature being the ability to deal with a large number of zones – more than 50.

    “Too much concurrent zone transfers causes new transfers to be held back. These excess transfers however were not properly scheduled for later,” the release notes highlight.

    It is a problem that the OpenDNSSEC team, which largely comprises engineers from a number of country-code top-level domains such as the UK’s Nominet, Canada’s CIRA, the Netherlands’ SIDN and Sweden’s IIS, is happy to see.

    DNSSEC enables internet infrastructure companies, including registries and ISPs, to digitally sign their zones and so make it much harder for people to spoof DNS traffic. The protocol is notoriously difficult and expensive to implement however, which has led to slower-than-hoped-for uptake.

    The OpenDNSSEC team started work back in March 2009 on a system that would make it simpler and hence cheaper to implement the protocol, releasing version 1.0.0 just under a year later.

    The software handles the complex process of signing a zone automatically and includes secure key management, all of which means fewer manual operations.

    https://lists.opendnssec.org/pipermail/opendnssec-announce/2016-January/000110.html

  11. Tomi Engdahl says:

    TP-Link announces the ‘world’s first’ 802.11ad router
    http://www.engadget.com/2016/01/06/tp-link-talon-ad7200/

    Two days after Acer’s announcement of the world’s first 802.11ad laptop, we now have a router that supports it. The TP-LINK Talon AD7200 is, obviously, the world’s first 802.11ad router. What’s 802.11ad? It’s a 60GHz WiFi standard that sits on top of the existing 2.4GHz and 5GHz bands. It’s designed specifically for short distances — think line of sight, in the same room — and tremendous speeds.

    The Talon AD7200 supports transfer rates of up to 7.2Gbps by combining multiple bands together.

    TP-Link isn’t ready to announce the price just yet. It’ll divulge such details closer to the router’s launch in the second quarter.

  12. Tomi Engdahl says:

    Free Cell Data Transfer with Slowest Morse Code Ever
    http://hackaday.com/2016/01/06/free-cell-data-transfer-with-slowest-morse-code-ever/

    Readers of a certain age will remember the payphone trick of letting the phone ring once and then hanging up to get your quarter back. This technique was used with a pre-planned call time to let someone know you made it or you were okay without accruing the cost of a telephone call. As long as nobody answered you didn’t have to pay for the call, and that continues to be the case with some pay-per-minute cellphone plans.

    This is the concept behind [Antonio Ospite’s] ringtone data transfer project called SaveMySugar.

    SaveMySugar: exchange messages using only phone rings
    http://ao2.it/en/blog/2016/01/04/savemysugar-exchange-messages-using-only-phone-rings

    Back in 2006, a friend of mine asked: “Antonio, would it be possible to send messages for free using only phone rings?”.

    So I spent some time on the problem and found a solution which kind of worked; but it was a quick and dirty hack and I didn’t publish it.

    In the following years, from time to time, I got back to thinking about this project; then, after almost ten years, the friend asked again if I was ever going to complete it, and so I decided to give it another go and publish a more solid prototype.

    And you know, when you can finally name something it becomes easier to analyze, the name in this case was “pulse-distance modulation”.

    If we consider a phone ring as the rising edge of a pulse we can represent information using the distance between rings, but for that to work we have to consider the fact that the time between placing a call and having the phone ringing on the other side is not a constant
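    To make the idea concrete, here is a hedged Python sketch of pulse-distance modulation. The symbol set and timings are invented for illustration; SaveMySugar’s actual protocol also has to handle the variable call-setup delay described above:

    ```python
    # Encode Morse symbols as the time gaps between successive ring events.
    SYMBOL_GAPS = {"dot": 10.0, "dash": 20.0, "space": 30.0}  # seconds, illustrative
    GAP_SYMBOLS = {v: k for k, v in SYMBOL_GAPS.items()}

    def encode(symbols, t0=0.0):
        """Return absolute ring times encoding a list of symbols."""
        times, t = [t0], t0
        for sym in symbols:
            t += SYMBOL_GAPS[sym]
            times.append(t)
        return times

    def decode(times, tolerance=3.0):
        """Map the gaps between observed rings back to symbols."""
        out = []
        for prev, cur in zip(times, times[1:]):
            gap = cur - prev
            best = min(GAP_SYMBOLS, key=lambda g: abs(g - gap))
            if abs(best - gap) <= tolerance:
                out.append(GAP_SYMBOLS[best])
        return out

    rings = encode(["dot", "dot", "dash"])   # Morse "U"
    print(decode([t + 1.0 for t in rings]))  # a constant offset doesn't matter
    ```

    Because only the differences between ring times carry information, a constant call-setup delay cancels out; only the jitter in that delay has to be absorbed by the decoder’s tolerance.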

  13. Tomi Engdahl says:

    Facebook Messenger Hits 800M Users: 2016 Strategy And Predictions
    http://techcrunch.com/2016/01/07/beyond-messaging/

  14. Tomi Engdahl says:

    Half of AT&T’s networks are controlled by open-source SDN code
    Fun fact of the day, we think you’ll find
    http://www.theregister.co.uk/2016/01/08/att_expanding_sdn/

    AT&T says it has replaced nearly half of the software in its vast operations with open-source software-defined networking (SDN) code.

    Speaking to developers just before this year’s CES conference kicked off on Tuesday, technology and operations veep John Donovan dropped that figure as evidence that the operator’s SDN strategy is working.

    Donovan said there are now “millions” of AT&T wireless subscribers connected to virtualized network services – for example, many will be relying on the so-called AT&T Integrated Cloud (AIC), which is based on OpenStack.

    The US telco ended 2015 with AIC deployed to 74 of its locations around the world, and has more than 275 businesses using it, we’re told. AT&T’s internal tools and the customer-facing applications share the same code in the cloud.

    OpenStack and SDN don’t just mean the network as a whole is more resilient, since a failure in one zone doesn’t cascade to others: Donovan said the configurability in AIC also makes it much easier to design an implementation or service to meet local regulations.

    SDN also helps the company “contain security threats better,” Donovan added.

    While OpenStack is the most important of the SDN projects to AT&T’s current requirements, he said the company is also contributing to OpenNFV, OpenDaylight, and ONOS.

  15. Tomi Engdahl says:

    Zedboard Multiport Ethernet
    http://hackaday.com/2016/01/08/zedboard-multiport-ethernet/

    The Zedboard uses Xilinx’s Zynq, which is a combination ARM CPU and FPGA. [Jeff Johnson] recently posted an excellent two-part tutorial covering the use of a Zedboard with multiple Ethernet ports. The lwIP (lightweight IP) stack takes care of the software end.

    Vivado is Xilinx’s software for configuring the Zynq (among other chips), and the tutorial shows you how to use it. The Ethernet PHY is an FPGA Mezzanine Card (FMC) with four ports that is commercially available. The project uses VHDL, but there is no VHDL coding involved, just the use of canned components.

    Using AXI Ethernet Subsystem and GMII-to-RGMII in a Multi-port Ethernet design
    http://www.fpgadeveloper.com/2015/12/using-axi-ethernet-subsystem-and-gmii-to-rgmii-in-a-multi-port-ethernet-design.html

    In this two-part tutorial, we’re going to create a multi-port Ethernet design in Vivado 2015.4 using both the GMII-to-RGMII and AXI Ethernet Subsystem IP cores. We’ll then test the design on hardware by running an echo server on lwIP. Our target hardware will be the ZedBoard armed with an Ethernet FMC, which adds 4 additional Gigabit Ethernet ports to our platform.
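    Once the design is running, the echo server can be sanity-checked from a host PC with a minimal client. This is a sketch, not part of the tutorial: the board address is an assumption to adjust for your setup, and port 7 is the conventional echo port used by lwIP’s example application:

    ```python
    import socket

    def echo_test(host="192.168.1.10", port=7, payload=b"hello zedboard"):
        """Send a payload to the echo server and verify it comes back."""
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(payload)
            reply = b""
            while len(reply) < len(payload):
                chunk = s.recv(len(payload) - len(reply))
                if not chunk:
                    break
                reply += chunk
        assert reply == payload, "echo mismatch"
        print("echo OK:", reply)

    echo_test()
    ```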

  16. Tomi Engdahl says:

    Jon Brodkin / Ars Technica:
    FCC says 34M people, about 10% of the US, still lack access to fixed broadband at FCC’s benchmark speed of 25Mbps down, 3Mbps up

    US fails its annual broadband deployment test at FCC
    Despite improvement, FCC says broadband still not being deployed to all in US.
    http://arstechnica.com/business/2016/01/us-fails-its-annual-broadband-deployment-test-at-fcc/

  17. Tomi Engdahl says:

    Tom Warren / The Verge:
    Microsoft’s cellular data app for Windows 10 suggests it may be developing its own SIM card

    Microsoft is building its own SIM card for Windows
    http://www.theverge.com/2016/1/7/10734648/microsoft-sim-card-cellular-data

    Microsoft is planning to make LTE access a little easier soon, thanks to its own SIM card. The software giant is currently testing a cellular data app that lets Windows 10 devices connect to various mobile network operators without a contract. The cellular data app has been published to the company’s Windows Store, but Microsoft has not yet announced its plans for the service.

    The app is designed to work on Windows 10 and “requires a Microsoft SIM card,” according to the listing. It’s not immediately clear which markets Microsoft plans to launch its SIM card in, or how the cellular data will be priced. Microsoft is planning to sell plans through the Windows Store, so the data will be tied to a Microsoft Account.

  18. Tomi Engdahl says:

    T-Mobile Confirms It Slows Connections to Video Sites
    http://www.wired.com/2016/01/t-mobile-confirms-it-slows-connections-to-video-sites/

    Though T-Mobile still wants to play games with words, the company has admitted it’s slowing down streams as part of its unlimited video service.

    T-Mobile customers who activate the company’s controversial Binge On video service will experience downgraded internet connection speeds when viewing videos on YouTube or other sites that don’t take part in Binge On, a T-Mobile spokesperson confirmed today. They’ll also experience slower speeds when trying to download video files for offline use from websites that do not participate in Binge On, at least until the customer deactivates the service.

    The confirmation brings clarity to questions that have swirled for weeks about Binge On. The service, which is turned on by default but can be switched off at any time, allows some T-Mobile customers to watch unlimited amounts of video from Netflix and Hulu (which are T-Mobile partners) but not YouTube (which isn’t) without having those streams count against their data plans.

    In other words, the data usage exemptions only apply to T-Mobile’s partners, and video quality is limited to 480p, the same resolution that DVDs use.

    T-Mobile has insisted that it “optimizes” videos for Binge On customers, but the EFF found that T-Mobile is actually downgrading all connections to video sites, including those that aren’t Binge On partners. As a result, users are typically served 480p versions of nearly all videos, since sites like YouTube and Netflix will automatically route customers with slow connections to the lower quality stream.

  19. Tomi Engdahl says:

    EE Times radio show January 15: PAM4 modulation
    http://www.edn.com/electronics-blogs/rowe-s-and-columns/4441163/EE-Times-radio-show-January-15–PAM4-modulation

    Every time digital data streams take a jump in speed, new problems occur that were hidden at lower speeds. Take skin effect. It wasn’t much of an issue at 10 Mbps but at 10 Gbps, it’s a big deal. As speeds for data channels reach 25 Gbps, issues such as the weave of PCB material can distort a signal.

    Now, 25 Gbps channels aren’t fast enough and engineers are looking into developing 50 Gbps (really 56 Gbps) channels. Communications standards committees are turning to modulation schemes such as PAM4. With four voltage levels instead of two, PAM4 can send two bits per symbol using about the same bandwidth as the 1-bit NRZ signal. Unfortunately, PAM4 brings new problems. You now have three eyes in the same voltage space as opposed to one. That means receiving bits just got at least three times harder.

    Great: not only are the eyes less than one-third the height of the NRZ eye, they’re not all the same shape. Plus they’re skewed in time. As a result, PAM4 is going to dominate the technical program at DesignCon 2016

  20. Tomi Engdahl says:

    AT&T brings back unlimited data plans for its DirecTV and U-verse subscribers
    $100 for a single line, $40 extra for additional smartphones or tablets
    http://www.theverge.com/2016/1/11/10746516/att-unlimited-data-plan-pricing-directv-uverse

  21. Tomi Engdahl says:

    Time-Sensitive Networking
    https://en.wikipedia.org/wiki/Time-Sensitive_Networking

    Time-Sensitive Networking (TSN) is a set of standards developed by the Time-Sensitive Networking Task Group[1] (IEEE 802.1). The TSN Task Group was formed in November 2012 by renaming the existing Audio/Video Bridging Task Group[2] and continuing its work. The name changed to reflect the extended scope of the standardization group’s work. The standards define mechanisms for the time-sensitive transmission of data over Ethernet networks.

    IEEE 802 Time-Sensitive Networking: Extending Beyond AVB
    https://standards.ieee.org/events/automotive/08_Teener_TSN.pdf

  22. Tomi Engdahl says:

    Comcast deploys 100-Gigabit Ethernet back end infrastructure for NBA’s Sacramento Kings
    http://www.cablinginstall.com/articles/2016/01/comcast-kings-ethernet.html?cmpid=EnlCIMCablingNewsJanuary112016&eid=289644432&bid=1272337

    The Sacramento Kings announced on Dec. 17 that the NBA team’s Golden 1 Center will rank among the world’s most connected indoor sports and entertainment venues as the result of a new multi-year agreement with Comcast Corporation (NASDAQ: CMCSA).

    According to a press release, “Comcast Business, the business services unit of Comcast, will deliver a connectivity platform with unparalleled bandwidth [to the venue] by installing fully redundant transport facilities and two 100 Gigabit Ethernet dedicated Internet circuits. The services will provide the back end infrastructure enabling the team to provide free Wi-Fi for fans, power the Kings mobile app, and supply cloud-based voice and unified communications services for team members at the arena and at the team’s corporate offices. As a result, the Internet connection at Golden 1 Center, as well as the [attached] public plaza and Downtown Commons (DOCO), will be over 17,000 times faster than the average home Internet connection, with the ability to handle more than 225,000 Instagram photo posts per second.”
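    Taking the press release’s “over 17,000 times faster” figure at face value, it is easy to sanity-check against the two 100GbE circuits:

    ```python
    # What average home speed does "17,000x slower than 2x100GbE" imply?
    venue_bps = 2 * 100e9
    home_bps = venue_bps / 17_000
    print(f"Implied average home connection: {home_bps / 1e6:.1f} Mbps")  # ~11.8 Mbps
    ```

    which lands close to typical 2015 US average home broadband speeds, so the comparison is at least internally consistent.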

    In addition, Comcast Business will provide the in-house video feed to all television monitors throughout Golden 1 Center, allowing patrons to access ancillary programming while attending concerts and other events at the arena.

  23. Tomi Engdahl says:

    Signal strength indicates the location

    A doctoral researcher at Tampere University of Technology says that mobile network signal strength measurements can be used for a variety of positioning applications. What’s more, this information is openly available in almost all radio networks and mobile devices.

    Jukka Talvitie’s dissertation provides solutions for radio network positioning based on received signal strength. Signal strength is measured constantly in radio networks, since it is used, among other things, to monitor base station coverage and to manage important network functions.

    “The necessary positioning measurements are obtained as a free by-product of the normal operation of the network,” says Jukka Talvitie.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3821:verkon-voimakkuus-kertoo-sijainnin&catid=13&Itemid=101
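    As an illustration of the basic ingredient of such positioning (not taken from the dissertation; all parameter values below are assumed examples), received signal strength can be turned into a rough distance estimate with a log-distance path-loss model:

    ```python
    # Invert the log-distance path-loss model PL(d) = PL(1 m) + 10*n*log10(d)
    # to estimate distance to a base station from received signal strength.
    def distance_from_rss(rss_dbm, tx_power_dbm=43.0, path_loss_exp=3.5,
                          loss_at_1m_db=38.0):
        """All parameters are illustrative, not calibrated values."""
        path_loss_db = tx_power_dbm - rss_dbm
        return 10 ** ((path_loss_db - loss_at_1m_db) / (10 * path_loss_exp))

    print(f"{distance_from_rss(-95.0):.0f} m")  # ~720 m with these example numbers
    ```

    Estimates like this from several base stations can then be combined, for example by trilateration or fingerprinting, to produce a position fix.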

  24. Tomi Engdahl says:

    Streetlight Shines on Metro IoT
    Cellular LED pole targets smart cities
    http://www.eetimes.com/document.asp?doc_id=1328658

    Bill McShane sells smart streetlights. His company, Philips, calls them “the connected city experience.”

    It’s a growing business and a signpost pointing to the future of smart cities and the emerging Internet of Things. The novel product is by turns simple, complex, nearly invisible and maybe someday omnipresent.

    Smart streetlights are essentially poles packed with LEDs, small cell LTE base stations, an optional control system for the lights and a smart meter to monitor the poles’ power use. So far McShane has about 150 of them being deployed in Los Angeles and San Jose. He believes over the next three years he potentially could sell thousands of them in towns all across America.

    The product has multiple customers and vendors. And though it has no road map, it invites many thoughts about its potential. The cellular-only poles arrive at a time when cities are also expanding metro Wi-Fi networks and thinking broadly about future sensor nets of various kinds

    Philips’ LED lighting group came up with the idea for smart streetlights and owns the product. The Dutch company struck a global deal with Ericsson to act as the subcontractor for the LTE base stations inside them. Other companies supply the optional smart meters some cities such as San Jose require.

    Cellular carriers are the primary customers, specifying and paying Ericsson for the base stations and determining where they will be placed.

    L.A., for example, charges Philips about $10,000 to install the poles and $40 a year to lease each site. San Jose will let Philips collect fees from the cellular carrier for ten years, after which the city will presumably collect any fees from operators.

    Philips helps cities create a simplified permitting process for the poles sometimes owned and serviced by different entities

    In the end, everyone makes money, McShane said. “It’s a triple win for residents, businesses and the city,”

    The technology inside Philips’ smart pole is fairly straightforward and narrowly defined. What such poles might look like in the future is an open question at a time when cities have tight budgets but are thinking in creative ways about IoT networks.

    Today’s smart pole comes in two configurations: the high-power pole packs three big Ericsson RU-11 or -12 60W LTE radios, while the low-power version crams in six 5-10W radios that cover a narrow range of about 1,000 feet.

    L.A. requires the poles to include backup batteries in case of a power loss; San Jose requires smart meters to pay its utility.

    There’s no road map for the smart streetlights yet. “We want to focus on LTE and get that right, it’s easy to integrate Wi-Fi or other things later,”

    The concept of city-wide IoT networks is still in its infancy and raises many thorny issues, said McShane of Philips. “It’s very capital intensive to deploy this, and we have to ask how data would flow to what central hub and what about analytics — there’s a lot of public policy that needs to be developed as well,” he said.

    The chief information officer of San Jose, Vijay Sammeta, agreed. He characterized smart cities as a “wild West” where it’s time to move from demonstration projects to the more difficult job of real deployments.

    “Smart cities have reawakened the concept of R&D in government but these things all take time, money and energy so they have to be aligned to our priorities,” said Sammeta.

    For example San Jose just completed a year-long pilot project with Intel that involved installing ten air-quality sensors around the city. “We do want to take a look at more of these projects…but I picture a street light pole with so many things strapped to it it’s leaning over,” he said.

    L.A. ultimately plans to install 400-600 smart poles and is pondering what other sensors they might carry. The city is considering adding charging stations for electric vehicles to some of them.

    Smart meters in the poles are key to enabling future additions

    “When we want to add environmental sensors or cameras, we will be able to install them more easily without rewiring circuits,”

    “The business case based on energy savings for converting to LED streetlights is well established, but the benefits of networking (to provide traffic monitoring or air-quality sensors, and etc.) are less well founded and more complex, often crossing city departments,” said Woods, the research director at Navigant.

    “Standards will be important for everything from the light fixtures to the communication networks,” Woods said. “The industry has made some progress, but it will be interesting to see what comes out of various smart city standards initiatives that are currently underway — which in part are trying to clarify city needs as much as relevant technical standards,”

    L.A. is enthusiastic about the remote monitoring units for LED street lights that Philips sells for $100-$250 a unit, and is preparing to install a second-generation version with embedded GPS.

    While Philips helps carriers build out their cellular infrastructure, L.A. and San Jose are expanding their metro Wi-Fi networks.

  25. Tomi Engdahl says:

    Report: Harsh environment fiber-optic components market hit $1.3 billion in 2015
    http://www.cablinginstall.com/articles/pt/2016/01/report-harsh-environment-fiber-optic-components-market-hit-1-3-billion-in-2015.html?cmpid=EnlCIMCablingNewsJanuary112016&eid=289644432&bid=1272337

    According to the study, the value of passive optical components, led by fiber-optic cable assemblies (glass optical fiber/GOF plus plastic optical fiber/POF), in harsh environments reached $711 million in 2015. Transmitter/receiver units held a 43% share of total components consumption in 2015. The total use of fiber-optic components in all harsh environment applications is forecast to increase at an average annual growth rate of 14.6%, from $1.3 billion in 2015 to $2.6 billion in 2020.

    The report notes that, historically, the market value of harsh environment fiber-optic components and devices has been dominated by military/aerospace-qualified components, with a 68% share back in 2010. “However, we forecast that the military/aerospace application’s market share will decrease over the forecast period (2015-2020),” said the analyst, who added, “The commercial/industrial fiber-optic component consumption, in turn, is dominated by plastic optical fiber (POF) link components. However, glass-based optical fiber is finding an increase in opportunity in commercial/industrial applications.”

  26. Tomi Engdahl says:

    Panduit’s small-diameter 28AWG line named best structured, physical network cabling in Asia
    http://www.cablinginstall.com/articles/pt/2016/01/panduit-asia-award.html?cmpid=EnlCIMCablingNewsJanuary112016&eid=289644432&bid=1272337

    Panduit announced that it has been awarded the NetworkWorld Asia “Readers’ Choice Award for Best Structured/Physical Network Cabling” for its 28 AWG cabling solution.

    The small-diameter twisted-pair cabling solution helps optimize space in telecommunication rooms and simplifies cable management.

    “Panduit is proud to be recognized by the readers of NetworkWorld Asia for the impact of our 28 AWG cabling solution,

  27. Tomi Engdahl says:

    Industry Voices: Lowenstein’s View–The worst acronyms in wireless
    http://www.fiercewireless.com/story/industry-voices-lowensteins-view-worst-acronyms-wireless/2016-01-04

    Late last year, in the middle of a “wireless 101” presentation to a consumer packaged goods company, somebody stuck their hand up and asked what “EICIC” stands for. I was stumped. Or maybe I didn’t sleep well the night before. No, I was stumped. Wikipedia to the rescue. But this got me to thinking that yes, there are lots of silly and over-the-top (does OTT apply to that?) acronyms in our industry.

    The funny thing is, some of these acronyms have actually found their way into products and services we actually try to market to consumers. Like VoLTE. The PR person or carrier marketing exec who thought this would be a good idea should be fired. Try explaining what VoLTE is to someone like my spouse, who still doesn’t really understand the difference between cellular and Wi-Fi. Some acronyms make sense in the consumer world: NASA, DVR. But MIMO? RCS? What are we thinking?

  28. Tomi Engdahl says:

    6 Wi-Fi predictions for 2016
    http://www.cablinginstall.com/articles/pt/2016/01/6-wi-fi-predictions-for-2016.html?cmpid=EnlCIMCablingNewsJanuary112016&eid=289644432&bid=1272337

    The Wi-Fi Alliance has published a list of predictions for 2016 that demonstrate Wi-Fi technology’s steady march forward, as it continues to deliver an even better user experience.

    1. Smart everything.
    The number of connected devices will reach 38.5 billion in 2020. This growth is due in part to manufacturers adding intelligence and connectivity to products not typically thought of as “high-tech,” like your Wi-Fi connected vacuum, coffee maker, door locks, or slow cooker. In 2016, companies that do not specialize in developing connectivity technologies — particularly those developing products for the Smart Home — will leverage a new Wi-Fi Alliance Implementer membership category, which will allow them to more easily deliver Wi-Fi connected products with secure operation, certified interoperability, and legacy compatibility with the 6.8 billion Wi-Fi products currently in use.

    2. Wi-Fi is in more places you want it to be.
    Seventy-three percent of Americans say it is very important to have access to Wi-Fi in their daily lives. Wi-Fi’s value is also recognized around the world, and last year we saw global Wi-Fi infrastructure investment pick up with assistance from cities and companies such as Google and Facebook. In 2016, city Wi-Fi deployments will increase, with some leveraging Wi-Fi CERTIFIED Passpoint to enable seamless roaming with other cities miles and even continents away. Sports fans will take advantage of enhanced Wi-Fi as stadiums and sports arenas

    3. Protecting unlicensed spectrum is a priority.
    Everyone understands and agrees with the need to protect the billions of Wi-Fi users. In late 2015, there was an important shift towards a collaborative model for delivering on Wi-Fi and LTE-U coexistence. In 2016, industry will make progress on a test regimen for LTE-U devices, and Wi-Fi Alliance will emerge as the premier forum for this collaboration.

    4. Wi-Fi location will emerge.
    Wi-Fi will enable a new breed of innovative applications built on location-based information. Wi-Fi location capabilities will bring users even closer to the world around them, enabling a variety of robust applications and usages — both indoors and outdoors. Retailers will be some of the first to leverage location awareness — a market that will reach $43.3 billion in four short years — and service providers will follow shortly after

    5. Wi-Fi portfolio of technologies gets even better.
    In 2016, a variety of new programs will make Wi-Fi better than ever. An update to Wi-Fi CERTIFIED ac technology will bring new features such as Multi-user MIMO to increase performance and network capacity, taking Wi-Fi beyond the gigabit Wi-Fi speeds already supported.

    6. Wi-Fi brings even more value for carriers.
    In 2016, carrier Wi-Fi that is faster, more robust, and even easier to use and manage will emerge. While Passpoint was the first of a series of Wi-Fi Alliance programs to enable a more cellular-like experience in Wi-Fi hotspots

  29. Tomi Engdahl says:

    John Legere / T-Mobile Issues & Insights Blog:
    T-Mobile CEO John Legere apologizes to EFF in an open letter, says company absolutely supports net neutrality, and Binge On is beneficial to consumers — Open Letter to Consumers about Binge On — Wow. What a week it has been out there about Binge On!

    Open Letter to Consumers about Binge On
    https://newsroom.t-mobile.com/issues-insights-blog/open-letter-to-consumers-about-binge-on.htm

    What is Binge On?
    Binge On is a FREE benefit given to all T-Mobile customers. It is and always has been a feature that helps you stretch your data bucket by optimizing ALL of your video for your mobile devices. It has two key parts to it:

    We use our proprietary techniques to attempt to detect all video, determine its source, identify whether it should be FREE and finally adjust all streams for a smaller/handheld device. (Most video streams come in at incredibly high resolution rates that are barely detectable by the human eye on small device screens and this is where the data in plans is wasted). The result is that the data in your bucket is stretched by delivering streamed video in DVD quality – 480p or better (whether you have a 2GB, 6GB or 10GB plan etc.) so your data lasts longer. Putting aside the 38+ services for which we provide FREE data for video through Binge On, as discussed below – this “stretching” of your data bucket is estimated to allow you to watch UP TO 3X MORE VIDEO from your data plan than before. This is a huge step forward.
    Binge On gives you FREE data that doesn’t hit your data bucket at all when you stream or watch from any of our participating video services (38 of them to date & counting).… We want to keep growing this list and already have over 50 services interested in coming on board. We don’t charge any video streaming companies to participate and every service provider is WELCOME! All the partner has to do is a minor amount of technical work to help us identify their video data reliably.

    How do I get Binge On?
    As with virtually all of our Un-carrier benefits, we immediately gave it to everyone! First we reached out to all of our customers via email and SMS message, and told them all about the new functionality that was coming their way. Then we turned it on, for everyone! So if you are a T-Mobile customer – you already have Binge On!

    We strive to default all of our customer benefits to “ON.”

    But here’s the thing, and this is one of the reasons that Binge On is a VERY “pro” net neutrality capability — you can turn it on and off in your MyTMobile account – whenever you want. Turn it on and off at will. Customers are in control. Not T-Mobile. Not content providers. Customers. At all times.

    Stream in HD for a movie, and go back to stretching your data bucket when you are done. It is 100% up to you – the consumer.

    As I mentioned last week, we look forward to sitting down and talking with the EFF and that is a step we will definitely take.

  30. Tomi Engdahl says:

    Installation of the data cable connecting the telecommunication networks of Finland and Germany across the bottom of the Baltic Sea is now complete. Commissioning tests will begin immediately.

    Once the commissioning tests are done, the new communication link is expected to enter commercial use in the spring.

    Finnish Cinia states that the submarine cable was planned and built to provide the shortest and fastest route from Northern Europe to the Central European telecommunications nodes, which makes it well suited to data center use as well as to traffic toward Asia and the easternmost European markets.

    The cable is expected to have a significant impact on the interest of data centers and other data-communications-intensive businesses in investing in operations in Finland.

    Source: http://www.tivi.fi/Kaikki_uutiset/nyt-vilistaa-bitti-syvalla-merikaapeli-saksaan-valmistui-6244426

  31. Tomi Engdahl says:

    Verizon Accused of Helping Spammers By Routing Millions of Stolen IP Addresses
    http://tech.slashdot.org/story/16/01/12/2322244/verizon-accused-of-helping-spammers-by-routing-millions-of-stolen-ip-addresses

    Spamhaus, an international non-profit organization that hunts down spammers, is accusing Verizon of indifference and facilitation of cybercrime because it failed for the past six months to take down stolen IP routes hosted on its network from where spam emails originated. Spamhaus detected over 4 million IP addresses, mainly stolen from China and Korea, and routed on Verizon’s servers with forged paperwork.

    Verizon Accused of Helping Cybercriminals by Routing Millions of Stolen IP Addresses
    http://news.softpedia.com/news/verizon-accused-of-helping-cybercriminals-by-routing-millions-of-stolen-ip-addresses-498819.shtml

    Verizon has some explaining to do because a recent report from The Spamhaus Project has pointed the finger at the company and accused it of aiding cybercriminals by routing over four million IP addresses through its network.

    The Spamhaus Project is an international non-profit organization that has for years maintained a spam blacklist and collaborated with law enforcement agencies to track down spammers and some of the Internet’s spam operations.

    As Spamhaus representative Barry Branagh explains, the recent depletion of the IPv4 address block has forced cybercriminals to steal IP ranges from the IP pools of companies that don’t use them, or haven’t gotten around to setting up routes for those IPs.

    “Setting up a route” is when an ISP tells other ISPs that a particular IP address block can be found on its servers. While spammers have found it quite easy to steal or buy IP blocks from the black market, to set up a route, they usually need to register as an AS (Autonomous System) and receive an ASN (Autonomous System Number).

    Verizon doesn’t vet ASs that want to route IP addresses on its servers

    Because of Verizon’s relaxed ASN setup process, cybercriminals have found it quite easy to submit forged documents to the company and have it route their stolen IP blocks through its network.

    Using this approach, Mr. Branagh says that over 4 million IP addresses have been routed through Verizon’s network, which were later used to spam users via the “snowshoe approach.” With this technique, spammers use multiple addresses, in various locations, to send spam email to their victims.
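
    To make the route-origin idea concrete, here is a minimal sketch (my own, not from the report) of how one could check which autonomous system currently originates a prefix in the global BGP tables, using RIPE NCC’s public RIPEstat API. A mismatch between the announcing AS and the registry records is the kind of anomaly Spamhaus is describing.

        # Illustrative sketch: ask RIPEstat which AS originates a prefix.
        # Endpoint and fields are from RIPE NCC's public API; the prefix
        # below is only an example, not one from the Spamhaus report.
        import requests

        def origin_asns(prefix):
            """Return the ASNs seen originating `prefix` in global BGP tables."""
            url = "https://stat.ripe.net/data/prefix-overview/data.json"
            resp = requests.get(url, params={"resource": prefix}, timeout=10)
            resp.raise_for_status()
            return [entry["asn"] for entry in resp.json()["data"]["asns"]]

        # A prefix whose announced origin AS does not match the registry
        # records is worth investigating.
        print(origin_asns("8.8.8.0/24"))  # expect [15169] (Google)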

  32. Tomi Engdahl says:

    Is copper dead?
    http://www.edn.com/electronics-blogs/designcon-central-/4441191/Is-copper-dead-?_mc=NL_EDN_EDT_EDN_analog_20160114&cid=NL_EDN_EDT_EDN_analog_20160114&elq=c6b0e310100242fb81f7a412e9bf3c16&elqCampaignId=26511&elqaid=30316&elqat=1&elqTrackId=af93009a3436474c81a5c44344bec463

    “The reports of my death have been greatly exaggerated,” could have just as well been said about the use of copper interconnects as Mark Twain said about himself, in a letter when he had been confused with someone else who had actually died.

    For as long as I’ve been coming to a DesignCon—and I attended the first one—I have heard it said, “surely we can’t go that fast in copper, we have to switch to optical interconnects.” When we were at 1 Gbps, this was said about 2.5 Gbps. When we were at 2.5 Gbps, this was said about 5 Gbps and every other generation after.

    Now we hear it being said about 56 Gbps. Is this another case of crying wolf, or have we really reached some fundamental limits to copper interconnect technology?

    There are many perspectives on the transition from copper to optical interconnects. Is the crossover at 28 Gbps or 56 Gbps, or can we still do copper at 112 Gbps? Or beyond?

    Taking a closer look at PCB traces
    http://www.edn.com/design/analog/4441192/Taking-a-closer-look-at-PCB-traces?_mc=NL_EDN_EDT_EDN_analog_20160114&cid=NL_EDN_EDT_EDN_analog_20160114&elq=c6b0e310100242fb81f7a412e9bf3c16&elqCampaignId=26511&elqaid=30316&elqat=1&elqTrackId=1b9fd03786534fc99a2de240a800b4ba

    A frequently asked question during Printed Circuit Board (PCB) layout review meetings is, “Are 50-ohm traces being used for the digital signals in this PCB layout?” Often the answer to this question is “yes”. However when making decisions that balance cost, performance and manufacturability the correct answer can also be “no” or “not for all the digital signals.” Alternative approaches can include focusing on the “controlled impedance” of PCB transmission lines and/or using other trace-impedance values.

    The 100-ohm differential-pair geometry is usually determined before the single-ended traces, and it should fit in the routing channel (between the vias) without discontinuities, because differential pairs usually carry the higher-speed digital signals. Once the trace width and spacing of the 100-ohm differential pair have been designed, the trace width for 50-ohm or 60-ohm single-ended lines on the same layer is usually derived accordingly.
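
    For readers who want a starting number, here is a minimal sketch of the classic IPC-2141 closed-form microstrip approximation. It is a first-order estimate only; real stackup decisions come from a field solver or the fabricator’s tables, and the example dimensions are assumptions of mine.

        # Illustrative sketch: IPC-2141 closed-form approximation for the
        # characteristic impedance of a surface microstrip trace. Valid
        # only roughly (0.1 < w/h < 2.0); real designs use a field solver.
        import math

        def microstrip_z0(h_mm, w_mm, t_mm, er):
            """Approximate single-ended impedance (ohms) of a microstrip.
            h: dielectric height, w: trace width, t: copper thickness, er: Dk."""
            return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

        # Example: FR-4 (er ~ 4.3), 0.2 mm dielectric, 0.35 mm trace, 1 oz copper
        print(round(microstrip_z0(0.2, 0.35, 0.035, 4.3), 1))  # ~ 48.6 ohms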

    Reference designs are an essential part of making PCB design decisions. However it is important to have a deep understanding of the principles and limitations behind the techniques being applied in the reference designs. Only then can optimal design trade-off decisions be made.

  33. Tomi Engdahl says:

    Analyzing closed eyes: CRJ and CDJ
    http://www.edn.com/electronics-blogs/eye-on-standards/4441189/Analyzing-closed-eyes–CRJ-and-CDJ?_mc=NL_EDN_EDT_EDN_analog_20160114&cid=NL_EDN_EDT_EDN_analog_20160114&elq=c6b0e310100242fb81f7a412e9bf3c16&elqCampaignId=26511&elqaid=30316&elqat=1&elqTrackId=7a6b4ff178174124943cf402f780e10e

    ISI (inter-symbol interference) closes eye diagrams at high data rates. The skin effect and dispersion—the frequency dependence of the dielectric “constant”—combine to cause the messy low-pass nature of channels, or equivalently, in the time domain, to smear the pulse response across many bits. As data rates increase and higher frequency components are introduced, ISI gets worse.

    The combination of one, two, or three types of equalization opens the eyes enough for the decision circuit to identify symbols: FFE (feed forward equalization) at the transmitter, CTLE (continuous time linear equalization) at the receiver, and/or DFE (decision feedback equalization) also at the receiver. For NRZ (non-return to zero) PAM2 (2-level pulse amplitude modulation) signals, there’s just the one eye with the baseband symbol for 1s high and 0s low. For PAM4 (4-level pulse amplitude modulation) there are three eyes, one separating each of the four symbol levels, since PAM4 encodes two bits in each symbol. In both cases, at high rates, the signal that enters the receiver has closed eyes.

    Because ISI closes the eyes, the trick to analyzing them is a combination of “embedding” the receiver equalization scheme(s) and using carefully chosen test patterns. The test patterns are chosen to control ISI.
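
    A toy simulation makes the point. This is my illustrative sketch, not a method from the article, and the one-pole channel model is deliberately crude: convolving an NRZ stream with a pulse response smeared over several unit intervals visibly shrinks the sampled eye.

        # Illustrative sketch: ISI from a crude one-pole low-pass "channel"
        # shrinks the vertical eye opening of an NRZ stream.
        import numpy as np

        rng = np.random.default_rng(0)
        spb = 16                                   # samples per bit (one UI)
        bits = rng.integers(0, 2, 2000) * 2 - 1    # NRZ levels +/-1

        tx = np.repeat(bits, spb).astype(float)
        t = np.arange(4 * spb)
        h = np.exp(-t / (1.5 * spb))               # pulse response smeared over ~4 UI
        h /= h.sum()
        rx = np.convolve(tx, h)[: tx.size]

        # Sample each bit mid-UI; the spread of sampled values around +/-1
        # is the vertical eye closure caused by ISI from neighbouring bits.
        samples = rx[spb // 2 :: spb]
        ones, zeros = samples[bits > 0], samples[bits < 0]
        print("worst-case eye opening ~", round(ones.min() - zeros.max(), 3))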

    Emerging 56 Gbit/s specifications offer new approaches to gauging signal impairments independent of ISI.

    Start with a square wave clock-like signal, 1010… for NRZ-PAM2 or (11)(00)(11)(00) for PAM4, and measure CRJrms (rms clock random jitter) and CDJpp (peak-to-peak clock deterministic jitter). Since there is no data signal, there are no symbols to interfere: CRJrms and CDJpp measure RJ (random jitter) and DJ (deterministic jitter) independent of ISI, while retaining the random noise and other uncorrelated impairments such as crosstalk, PJ (periodic jitter), and any other EMI (electromagnetic interference). Apply the transmitter equalization scheme, if there is one, and embed the receiver CTLE and/or DFE as appropriate so that the square waveform also includes the equalized version of all the signal impairments that are left over.

    The dual-Dirac model is usually used to estimate TJ(BER), the total jitter defined at a BER (bit-error ratio) or, equivalently, the eye width at a given BER.
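
    For reference, the usual dual-Dirac combination is TJ(BER) = DJ(dd) + 2·Q(BER)·RJrms. Below is a minimal sketch, assuming the common convention that Q solves BER = Phi(−Q); published tables differ slightly depending on the assumed transition density, and the example figures are made up.

        # Illustrative sketch: dual-Dirac total jitter at a target BER,
        # TJ(BER) = DJ(dd) + 2 * Q(BER) * RJrms. Q convention assumed:
        # BER = Phi(-Q), the Gaussian tail probability.
        from statistics import NormalDist

        def total_jitter(rj_rms_ps, dj_dd_ps, ber=1e-12):
            q = -NormalDist().inv_cdf(ber)         # ~7.03 at BER = 1e-12
            return dj_dd_ps + 2 * q * rj_rms_ps

        # Example figures (made up): 0.5 ps RMS RJ, 4 ps dual-Dirac DJ
        print(round(total_jitter(0.5, 4.0), 2), "ps at 1e-12")  # ~11.03 ps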

  34. Tomi Engdahl says:

    The fundamentals of PAM4
    http://www.edn.com/design/systems-design/4441212/The-fundamentals-of-PAM4

    As our society’s hunger for data grows—not only more data, but more data delivered faster—older modulation schemes based on NRZ-type encoding grow increasingly inadequate. We need to get data from point A to point B as efficiently as possible, whether that means between chips on a PC board or from one end of a long-haul optical fiber to the other. A modulation scheme that’s gaining favor in many quarters is PAM4, and in this post we’ll look at the basics of PAM4 before turning to the test and analysis challenges it poses.

    What else might we do to double the bit rate? One approach is to serialize the two bit streams. Instead of two 28-Gb/s channels, we create one 56-Gb/s channel. As a result, in the same period in which we had one bit transmitted at 28 Gb/s, we now have two bits transmitted at 56 Gb/s.

    We need a way to double the bit rate in the channel without doubling the required bandwidth, and that’s where PAM4 enters the picture. PAM4 takes the L (Least Significant Bit) signal, divides it in half, and adds it to the M (Most Significant Bit) signal. The result is four signal levels instead of two, with each signal level corresponding to a two-bit symbol.

    An eye diagram for a PAM4 signal is unusual, with three eye openings and four levels stacked vertically.

    However, the opening of each of these three eyes is A/3, while the bandwidth requirement rolls back to 1/T. Thus, this signal, which moves 56 Gb/s, does so using the same amount of bandwidth as did the ML signal that moved 28 Gb/s. But with the eye opening reduced to A/3, the SNR of our M+L/2 signal is roughly one third that of the two-level signal.

    We have, in effect, traded off SNR for bandwidth. Many serial links are bandwidth-constrained, as it’s difficult to move much more than 28 Gb/s over any length of copper. But when you have some SNR headroom, it may well pay off to consider a PAM4 modulation scheme.
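
    Here is a toy sketch of the two-bits-per-symbol mapping described above. A Gray-coded level order is assumed; the exact mapping varies by standard, so treat this purely as an illustration.

        # Illustrative sketch: map two-bit (MSB, LSB) symbols onto four
        # amplitude levels, i.e. two bits per symbol in one NRZ bandwidth.
        # Gray-coded level order assumed; exact mappings vary by standard.
        def pam4_encode(bits):
            """Map a bit string, e.g. '1101', to PAM4 levels in [-1, 1]."""
            levels = {('0', '0'): -1.0, ('0', '1'): -1 / 3,
                      ('1', '1'): 1 / 3, ('1', '0'): 1.0}
            return [levels[p] for p in zip(bits[0::2], bits[1::2])]

        # Adjacent levels are 2/3 apart on a full swing of 2: the A/3 eye opening.
        print(pam4_encode('00011110'))  # [-1.0, -0.333..., 0.333..., 1.0]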

  35. Tomi Engdahl says:

    Mobile phone security to be unified

    The Nordic communications authorities have issued their first joint recommendation for improving the security of mobile networks. The recommendation aims to raise the security of implementations that are by now partly obsolete.

    The Nordic communications authorities aim to bring mobile communications and network security to a consistent level in all the Nordic countries, especially since a number of potential abuses of SS7 signaling traffic in mobile telephony have come to light in recent years.

    The function of signaling transport is to enable, among other things, the forwarding of calls and other messages within a telecom operator’s own network as well as between different telecom operators’ networks.

    Many of the technical solutions used in the telephone network were designed in the 1970s and 80s, so the security deficiencies stem from a long life cycle. The opportunities for abuse originate from exploiting the trust-based approach of what was originally a closed network between telecom companies.

    The Nordic communications authorities have now for the first time given telecom companies a joint recommendation whose purpose is to prevent potential abuse of SS7 signaling traffic.

    Source: http://www.uusiteknologia.fi/2016/01/15/kannykoiden-tietoturvaa-yhtenaistetaan/

  36. Tomi Engdahl says:

    World Bank Says Internet Technology May Widen Inequality
    http://news.slashdot.org/story/16/01/15/027213/world-bank-says-internet-technology-may-widen-inequality

    Somini Sengupta writes in the NY Times that a new report from the World Bank concludes that the vast changes wrought by Internet technology have not expanded economic opportunities or improved access to basic public services but stand to widen inequalities and even hasten the hollowing out of middle-class employment. “Digital technologies are spreading rapidly, but digital dividends — growth, jobs and services — have lagged behind,”

    Internet Yields Uneven Dividends and May Widen Inequality, Report Says
    http://www.nytimes.com/2016/01/14/world/asia/internet-yields-uneven-dividends-and-may-widen-inequality-report-says.html?_r=0

    “Digital technologies are spreading rapidly, but digital dividends — growth, jobs and services — have lagged behind,” the bank said in a news release announcing the report.

    Those who are already well-off and well-educated have been able to take advantage of the Internet economy, the report concluded pointedly, and despite the expansion of Internet access, 60 percent of humanity remains offline.

    China has the largest number of Internet users, followed by the United States and India, according to the report.

    The bank’s findings come at a time when the technology industry — which sometimes tends to see itself as the solver of the world’s greatest problems — has been rushing to expand Internet access through a variety of new means. Google, through its Project Loon, aims to use a constellation of balloons to beam down wireless signals to places that lack connectivity. Facebook has offered a limited sphere of the World Wide Web for users in some developing countries — and in turn, has come under intense criticism, especially in India.

    “Countries that are investing in both digital technology and its analog complements will reap significant dividends, while others are likely to fall behind,” the report added. “Technology without a strong foundation risks creating divergent economic fortunes, higher inequality and an intrusive state.”

    How a society takes advantage of information technology depends on what kind of a society it is, the report concluded.

    The bank, which says it has itself invested $12.6 billion in information technology projects, calls on countries to make the Internet “universal, affordable, open and safe.” Yet it also takes pains to say that expanding access will not be enough for citizens to take advantage of the benefits

    “The triple complements — a favorable business climate, strong human capital and good governance — will sound familiar — and they should because they are the foundation of economic development,”

  37. Tomi Engdahl says:

    Service Provider Builds National Network of Unmanned Data Centers
    http://hardware.slashdot.org/story/16/01/14/2146215/service-provider-builds-national-network-of-unmanned-data-centers

    Colocation and content delivery specialist EdgeConneX is operating unmanned “lights out” data centers in 20 markets across the United States, marking the most ambitious use to date of automation to streamline data center operations. While some companies have operated prototypes of “lights out” unmanned facilities (including AOL) or deployed unmanned containers with server gear, EdgeConneX built its broader deployment strategy around a lean operations model.

    Scaling Up the Lights Out Data Center
    http://datacenterfrontier.com/lights-out-data-center-edgeconnex/

    The “lights out” server farm has been living large in the imaginations of data center futurists. It’s been 10 years since HP first made headlines with its vision of unmanned data centers, filled with computers that monitor and manage themselves. Even Dilbert has had sport with the notion.

    But the list of those who have successfully implemented lights out data centers is much shorter. HP still has humans staffing its consolidated data centers, although it has used automation to expand their reach (each HP admin now manages 200 servers, compared to an initial 1-to-15 ratio). In 2011, AOL announced that it had implemented a small unmanned data center, but that doesn’t appear to have progressed beyond a pilot project.

    EdgeConneX is changing that. The company has pursued a lights out operations model in building out its network of 24 data centers across the United States and Europe. EdgeConneX, which specializes in content distribution in second-tier markets, designs its facilities to operate without full-time staff on site, using sophisticated monitoring and remote hands when on-site service is needed.

    The EdgeConneX design is perhaps the most ambitious example yet of the use of automation to streamline data center operations, and using design as a tool to alter the economics of a business model.

    The Deployment Template as Secret Sauce

    The key to this approach is an advanced design and operations template that allows EdgeConneX to rapidly retrofit existing buildings into data centers with Tier III redundancy that can support high-density workloads of more than 20kW per cabinet. This allowed the company to deploy 18 new data centers in 2014.

    A lean operations model was baked into the equation from the beginning

    “Our primary build is a 2 to 4 megawatt data center and about 10,000 square feet,” said Lawson-Shanks. “We always build with a view that we’ll have to expand. We always have an anchor tenant before we go to market.”

    That anchor is usually a cable multi-system operator (MSO) like Comcast or Liberty Global.

    Solving the Netflix Dilemma

    “We’re helping the cable companies solve a problem: to get Netflix and YouTube off their backbones,” said Lawson-Shanks. “The network is being overwhelmed with content, especially rich media. The edge is growing faster than you can possibly imagine.”

    Data center site selection is extremely important in the EdgeConneX model. In each new market, the company does extensive research of local network and telecom infrastructure, seeking to identify an existing building that can support its deployment template.

    “This is a patented operations management system and pricing model that makes every Edge Data Center a consistent experience for our customers nationwide,”

    Managing Infrastructure from Afar

    The lynchpin of the lights-out approach is data center infrastructure management (DCIM) software. EdgeConneX uses a patented data center operating system called EdgeOS to monitor and manage its facilities. The company has the ability to remotely control the generators and UPS systems at each data center.

    EdgeConneX facilities are managed from a central network operations center in Santa Clara, with backup provided by INOC

    Currently 20 of the 24 EdgeConnex data centers are unmanned. Each facility has a multi-stage security system that uses biometrics, PIN and keycard access, with secured corridors (“mantraps”) and video surveillance.

    EdgeConneX expects to be building data centers for some time to come. Demand for edge-based content caching is growing fast

    “The user experiences and devices are changing,” he said. “But fundamentally, it’s latency, latency, latency.”

    Much of this technology wasn’t in the mix in 2005 when the first visions emerged of an unmanned data center. But as we see edge data centers proliferate, the EdgeConneX model has demonstrated the possibility of using automation to approach these facilities differently. This approach won’t be appropriate for many types of workloads, as most data centers in second-tier and third-tier markets will serve local businesses with compliance mandates that require high-touch service from trained staff.

    But one thing is certain: The unmanned “lights out” data center is no longer a science project or flight of fancy. In 20 cities across America, it’s delivering Netflix and YouTube videos to your devices.

  38. Tomi Engdahl says:

    AT&T chooses Ubuntu Linux instead of Microsoft Windows
    http://betanews.com/2016/01/13/att-chooses-ubuntu-linux-instead-of-microsoft-windows/

    While Linux’s share of the desktop pie is still virtually nonexistent, it owns two arguably more important markets — servers and smartphones. As PC sales decline dramatically, Android phones are continually a runaway market share leader. In other words, fewer people are buying Windows computers — and likely spending less time using them — while everyone and their mother are glued to their phones. And those phones are most likely powered by the Linux kernel.

    AT&T has partnered with Canonical to utilize Ubuntu for cloud, network, and enterprise applications. That’s right, AT&T did not choose Microsoft’s Windows when exploring options. Canonical will provide continued engineering support too.

    “By tapping into the latest technologies and open principles, AT&T’s network of the future will deliver what our customers want, when they want it. We’re reinventing how we scale by becoming simpler and modular, similar to how applications have evolved in cloud data centers. Open source and OpenStack innovations represent a unique opportunity to meet these requirements and Canonical’s cloud and open source expertise make them a good choice for AT&T”, says Toby Ford, Assistant Vice President of Cloud Technology, Strategy and Planning at AT&T.

    John Zannos, Vice President of Cloud Alliances and Business Development at Canonical explains, “this is important for Canonical. AT&T’s scalable and open future network utilizes the best of Canonical innovation. AT&T selecting us to support its effort in cloud, enterprise applications and the network provides the opportunity to innovate with AT&T around the next generation of the software-centric network and cloud solutions. Ubuntu is the Operating System of the Cloud and this relationship allows us to bring our engineering expertise around Ubuntu, cloud and open source to AT&T”.

    AT&T selects Ubuntu for cloud and enterprise applications
    http://insights.ubuntu.com/?p=31292

    AT&T has selected Canonical to be part of its effort to drive innovation in the network and cloud. Canonical will provide the Ubuntu OS and engineering support for AT&T’s cloud, network and enterprise applications. AT&T chose Ubuntu based on its demonstrated innovation, and performance as the leading platform for scale-out workloads and cloud.

    “By tapping into the latest technologies and open principles, AT&T’s network of the future will deliver what our customers want, when they want it,” said Toby Ford, Assistant Vice President of Cloud Technology, Strategy and Planning at AT&T. “We’re reinventing how we scale by becoming simpler and modular, similar to how applications have evolved in cloud data centers. Open source and OpenStack innovations represent a unique opportunity to meet these requirements and Canonical’s cloud and open source expertise make them a good choice for AT&T.”

    About Canonical:

    Canonical is the company behind Ubuntu, the leading OS for cloud, scale-out and ARM-based hyperscale computing featuring the fastest, most secure hypervisors, as well as the latest in container technology with LXC and Docker. Ubuntu is also the world’s most popular operating system for OpenStack. Over 80% of the large-scale OpenStack deployments today are on Ubuntu.

    About AT&T:

    AT&T Inc. (NYSE:T) helps millions around the globe connect with leading entertainment, mobile, high speed Internet and voice services. We’re the world’s largest provider of pay TV. We have TV customers in the U.S. and 11 Latin American countries.

  39. Tomi Engdahl says:

    Facebook is no charity, and the ‘free’ in Free Basics comes at a price
    Why would India reject it? Here’s why…
    http://www.theregister.co.uk/2016/01/18/facebook_is_no_charity_and_the_free_in_free_basics_comes_at_a_price/

    Who could possibly be against free internet access? This is the question Mark Zuckerberg asks in a piece for the Times of India in which he claims Facebook’s Free Basics service “protects net neutrality”.

    Free Basics is the rebranded Internet.org, a Facebook operation where by partnering with local telecoms firms in the developing world the firm offers free internet access – limited only to Facebook, Facebook-owned WhatsApp, and a few other carefully selected sites and services.

    Zuckerberg was responding to the strong backlash that Free Basics has faced in India, where the country’s Telecom Regulatory Authority recently pulled the plug on the operation while it debates whether telecoms operators should be allowed to offer different services with variable pricing, or whether a principle of network neutrality should be enforced.

    Not content to await the regulator’s verdict, Facebook has come out swinging. It has paid for billboards, full-page newspaper ads, and television ad campaigns to try to enforce the point that Free Basics is good for India’s poor.

    In his Times piece, Zuckerberg goes one step further – implying that those opposing Free Basics are actually hurting the poor. He argued that “for every ten people connected to the internet, roughly one is lifted out of poverty”.

    Without reference to supporting research, he instead offers an anecdote about a farmer called Ganesh from Maharashtra state.

    This is not a charity

    First, despite his claims to the contrary, Free Basics clearly runs against the idea of net neutrality by offering access to some sites and not others. While the service is claimed to be open to any app, site or service, in practice the submission guidelines forbid JavaScript, video, large images, and Flash, and effectively rule out secure connections using HTTPS.

    This means that Free Basics is able to read all data passing through the platform. The same rules don’t apply to Facebook itself, ensuring that it can be the only social network, and (Facebook-owned) WhatsApp the only messaging service, provided.

    Yes, Free Basics is free. But how appealing is a taxi company that will only take you to certain destinations, or an electricity provider that will only power certain home electrical devices? There are alternative models: in Bangladesh, Grameenphone gives users free data after they watch an advert. In some African countries, users get free data after buying a handset.

    Second, there is no convincing body of peer-reviewed evidence to suggest internet access lifts the world’s poor out of poverty.

    Should we really base telecoms policy on an anecdote and a self-serving industry report sponsored by the firm that stands to benefit?

    India has a literacy rate of 74 per cent, and a much smaller proportion speak English well enough to read it. Literate English speakers and readers tend not to be India’s poorest citizens.

    Poverty consists of more than just no internet

    India will not always have low levels of internet access; that is not the issue. In fact, Indian internet penetration growth rates are relatively high. Instead, the company sees Free Basics as a means to establish a bridgehead into the country, establishing a monopoly before other firms move in.

    In presenting Free Basics as an act of altruism Zuckerberg tries to silence criticism. “Who could possibly be against this?”, he asks:

    What reason is there for denying people free access to vital services for communication, education, healthcare, employment, farming and women’s rights?

    That is the right question, but Free Basics is the wrong answer.

  40. Tomi Engdahl says:

    BBC:
    BT takeover of EE gets final Competition and Markets Authority clearance
    http://www.bbc.com/news/business-35320831

    BT Group’s takeover of mobile phone network EE has been given final clearance by the Competition and Markets Authority (CMA).

    The £12.5bn deal brings together the UK’s largest fixed-line business and the largest mobile telecoms business.

    The CMA said it was unlikely to harm competition as BT was “smaller in mobile” and EE a “minor player” in broadband.

    But rival Vodafone said it still had “wider market concerns”.

    The deal creates a communications giant covering fixed-line phones, broadband, mobile and TV.

  41. Tomi Engdahl says:

    Next-generation Fibre Channel speeds demand high-performance cabling
    http://www.cablinginstall.com/articles/print/volume-23/issue-12/features/data-center/next-generation-fibre-channel-speeds-demand-high-performance-cabling.html?cmpid=Enl_CIM_Standards_January132016&eid=289644432&bid=1277333

    Fibre Channel, the protocol that has served storage networks for decades and is as vital as ever to modern computing environments, continues to demand every bit as much performance out of a cabling infrastructure as do the Ethernet variants that are prominent in data center local area networks. Today 32- and 128-Gbit/sec speeds are in focus for Fibre Channel.

    Fibre Channel specifications are produced by the International Committee for Information Technology Standards (INCITS; http://www.incits.org) and, in particular, that organization’s T11 Fibre Channel Interfaces committee/T11.2 Physical Variants Task Group. The Fibre Channel Industry Association (FCIA; http://www.fibrechannel.org) is a separate organization from INCITS, but the FCIA group produces much of the publicly available information about Fibre Channel technology and standards development.

    Concerning the optical media that can support Gen 6 speeds, Kamino said OM2 multimode can support distances up to 20 meters; OM3 multimode can support up to 70 meters; and OM4 can support a reach of 100 meters. Singlemode fiber can support 10-kilometer distances of 32GFC.

  42. Tomi Engdahl says:

    IEEE 802.3 forms study groups for 25, 50 and 100/200 Gbit/sec Ethernet
    November 25, 2015
    http://www.cablinginstall.com/articles/2015/11/ieee-8023-study-groups-25-50-100-200-gbe.html?cmpid=Enl_CIM_Standards_January132016&eid=289644432&bid=1277333

    The Ethernet Alliance recently commended the decision to form three study groups within IEEE 802.3. The groups are exploring the development of standards for 25-Gbit/sec Ethernet over singlemode fiber, 50-Gbit/sec Ethernet over a single lane, and next-generation 100- and 200-Gbit/sec Ethernet. The Ethernet Alliance commented that by forming these groups, IEEE 802.3 is “demonstrating Ethernet’s capacity for dynamically addressing the changing needs of its rapidly expanding marketplace.”

    Scott Kipp, Ethernet Alliance president and principal technologist for Brocade, stated, “Ethernet is beginning the standardization of a new era of speeds based on 50-Gbit/sec signaling technology. The 50-Gbit/sec lanes will enable 50 Gigabit Ethernet SFP56 modules, and 200GbE QSFP56 modules and other corresponding technologies as we have shown in the 2015 Ethernet Roadmap.”

  43. Tomi Engdahl says:

    Standard for 16- and 32-fiber connector interface taking shape
    http://www.cablinginstall.com/articles/print/volume-23/issue-9/features/standards/standard-for-16-and-32-fiber-connector-interface-taking-shape.html?cmpid=Enl_CIM_Standards_January132016&eid=289644432&bid=1277333

    The TR-42.13 committee of the Telecommunications Industry Association, which focuses on passive optical devices and fiber-optic metrology, continues to make progress on the development of a specification for a 16- and 32-fiber array connector. The interface will be defined within a FOCIS document (Fiber Optic Connector Intermateability Standard), which establishes a connector’s physical characteristics. FOCIS 18 will define the 16- and 32-fiber connector. The fibers will be arranged in either one or two rows of 16.

    As we reported six months ago (“Parallel-optics technology evolving with higher-speed transmission,” March 2015), indications are that the IEEE’s 400-Gbit Ethernet specification 400GBase-SR16 will use 16 lanes of 25-Gbit/sec transmission, as well as 16 lanes of 25-Gbit/sec reception, over OM4 fiber to a distance of 100 meters. The development of 16- and 32-fiber connectors is a natural fit for the 400G application.

  44. Tomi Engdahl says:

    Industrial cabling standards to address higher speeds, fewer wires, and longer distances
    December 1, 2015
    http://www.cablinginstall.com/articles/print/volume-23/issue-12/features/design/industrial-cabling-standards-to-address-higher-speeds-fewer-wires-and-longer-distances.html?cmpid=Enl_CIM_Standards_January132016&eid=289644432&bid=1277333

    The Telecommunications Industry Association’s (TIA) standard-development committee TR-42 Telecommunications Cabling Systems Engineering Committee includes several subcommittees, one of which is TR-42.9, Industrial Telecommunications Infrastructure. That subcommittee originally produced and has revised the ANSI/TIA-1005 Telecommunications Infrastructure Standard for Industrial Premises. Most recently, the standard was revised to its “A” version, ANSI/TIA-1005-A, in 2012.

    1 Gig over 1 pair

    Specifications for a single-pair cabling system that will support speeds of 1 Gbit/sec for distances up to 40 meters will be included in an addendum to ANSI/TIA-1005-A. TR-42.9 is developing the addendum in parallel with the Institute of Electrical and Electronics Engineers’ (IEEE; http://www.ieee.org) 802.3bp 1000Base-T1 PHY Task Force. The 802.3bp project’s objectives were updated most recently in July 2014. Among the objectives are, “Support 1 Gbit/sec operation in automotive and industrial environments (e.g. EMC [electromagnetic compatibility], temperature),” and, “Define the performance characteristics of optional link segment(s) … for industrial controls and/or automation, transportation (aircraft, railway, bus and heavy trucks) applications with a goal of at least 40-meter reach.”

    Lounsbury said a one-pair cable that will support gigabit speeds to at least 40 meters is likely to be a shielded cable with low loss. “Although probably not much less expensive than a four-pair gigabit-capable cable (there are economies of scale at work when manufacturers produce four-pair cable that will not apply to a one-pair cable), the one-pair cable will be easier to terminate and will take up less space in ductways.”
    Additionally, Lounsbury noted, the electronic devices to which one-pair cabling will connect will be simplified.

    A single-pair rather than four-pair connection is advantageous. “If we move from the RJ45 to a robust, sealed single-pair connector, it helps to shrink the product,”

    When 802.3bp was in its initial stages of gaining project approval, the task force’s chair, Steve Carlson, contributed a post to the Ethernet Alliance Blog (www.ethernetalliance.org/blog) titled “Single Twisted Pair: The Next Frontier.” In the post, Carlson explained, “Not all networks need to operate at the fastest possible speed. Speed is related to the needs of the application, and many applications are well served by what would be considered low-speed links. The IEEE P802.3bp 1000Base-T1 PHY Task Force is developing a standard for full-duplex operation at 1-Gbit/sec over a single twisted copper wire pair …”

    “It’s likely that the 1000Base-T1 PHY will find homes in a wide variety of m2M (machine-to-machine) and ‘Internet of Things’ devices, especially when combined with IEEE 802.3bu Power over Data Lines Task Force that will deliver DC power over the same twisted wire pair.”

    Yes, it will be possible to send both gigabit-speed data and power over a single twisted pair when the IEEE’s 802.3bp and 802.3bu specifications are completed.

    There is also a proposal to add a study group that would focus on transmitting data at 10 Mbits/sec over a single twisted pair to a length of 1 kilometer.

    Also within TIA TR-42.9, work continues on a specification for cabling to support 1-Gigabit data transmission speed over four-pair cabling in environments with elevated noise levels. While 10- and 100-Mbit/sec transmission speeds are sufficient for many industrial communication and control systems, some are constrained by a 100-Mbit/sec maximum speed.

    ODVA (www.odva.org) promulgated the 1-Gigabit, 4-pair standardization effort within TR-42.9. ODVA is a 20-year-old organization that describes its membership as “the world’s leading automation companies.”

    Historically devices incorporating ODVA’s EtherNet/IP technology maxed out at 100 Mbits/sec, and gigabit speeds came into play “for large-scale, enterprise-connected and enterprise-integrated systems,” (source: “Network Infrastructure for EtherNet/IP: Introduction and Considerations,” published by ODVA). Gigabit-speed connections have been used “for switch-to-switch uplink or backbone connections, not control connections to EtherNet/IP devices. EtherNet/IP industrial control devices support 10-Mbit/sec and 100-Mbit/sec data rates. The decentralized architecture of industrial Ethernet reduces the need for large concentrations of devices and higher-than-100-Mbit/sec connections.”

    That has been true historically, but as mentioned earlier, the implementation of vision systems in industrial environments is forcing that data rate up to gigabit speeds.

    End-to-end link

    A third effort within TR-42.9 seeks to define what is being called end-to-end link; the effort corresponds to work being carried out within the International Electrotechnical Commission (IEC) SC 25 (Interconnection of Information Technology Equipment) Working Group 3 (Customer Premises Cabling).

    That group’s work ultimately will be published as ISO/IEC 11801-99-02; the term “end-to-end link” is often abbreviated “E2E link.” Oversimplifying the effort from a technical standpoint, the E2E link initiative will include the plugs at each end of a channel. Channels as defined in TIA-568 and ISO/IEC 11801 specifications exclude the plugs at the ends of the channel’s equipment and work-area cords.

    “The document is for Class D and E links with up to 5 link segments and up to 6 connections including the connections at both ends.”

    CommScope’s reference to as many as 6 connections and 5 segments in an industrial cabling circuit sheds light on the fact that these cabling runs often include more segments and connections than typically found in other network environments, such as commercial office buildings or data centers.

  45. Tomi Engdahl says:

    America to ITU: Sort out your spectrum policy
    WRC 2015 failed, says FCC’s O’Reilly
    http://www.theregister.co.uk/2016/01/18/america_to_itu_sort_out_your_spectrum_policy/

    America is threatening to “go it alone” on spectrum policy, again.

    Speaking to think-tank New America, Federal Communications Commissioner Michael O’Reilly took a swipe at last year’s World Radiocommunication Conference and said America “and other countries … will move forward in key spectrum areas, such as 600 MHz and 28 GHz, despite decisions at WRC, and we won’t be tied to any future upper 5 GHz decisions”.

    America went to WRC with two aims: to get a global allocation of 600 MHz spectrum for broadband applications, and to kick off sharing studies in the 5 GHz and 28 GHz bands. Both initiatives failed on the floor of the conference.

    In the 600 MHz band, things turned out even worse than America had hoped: individual countries can only offer a mobile allocation in the band if neighbouring countries agree.

    America already has plans in place to auction some of that band, and networks will be deployed well before the issue comes before the WRC again in 2023, according to Australian telco newsletter Communications Day.

    The FCC considers the 5.9 GHz band a high-stakes battle: “The upper 5 GHz is a critical component to the continued success of unlicensed spectrum use,” he told New America. “This is because it’s adjacent to the rest of the 5 GHz band, in which Wi-Fi has been incredibly successful.”

  46. Tomi Engdahl says:

    IPv6 Since 20 Years: Why IPv6 Is Better For Privacy And How To Use It Without ISP Support
    https://tlhp.cf/ipv6-without-isp/

    On 1 December 1995 the Internet Engineering Task Force (IETF) published RFC 1883, the beginning of the history of a new Internet-layer protocol. More than 20 years have passed since that day, and today approximately 10% of all hosts access the Internet with IPv6 support (data from Google statistics). The per-country statistics show that IPv6 is popular in developed countries with strong technology sectors (with some exceptions):

    United States – 24.21 %
    Germany – 22.41 %
    Switzerland – 29.21 %
    Portugal – 24.09 % (funny paradox: Spain – 0.06 %)
    Greece – 19.9 %
    Belgium – fantastic 42.96 %
    Peru – 15.85 %

    Why IPv6?

    The biggest reasons:

    addresses: ~4 billion in IPv4 vs 3.4×10^38 in IPv6. Four billion is far too few for the modern Internet: websites, smartphones, IoT (Internet of Things) devices, web cameras and much more. 3.4×10^38 works out to vastly more than one IPv6 address for every square millimeter of the Earth’s surface (see the back-of-the-envelope check after this list)
    easy management and configuration – IPv6 networks use autoconfiguration, and management of big networks is coherent; they can be used without gateways and NAT
    IPv6 kills NAT – transparent host-to-host communication is easy and effective. IPv6 supports NAT, but there is no reason to use it, because public addresses are cheap and available to everyone
    security – IPv6 has the IPsec protocol suite built in, whereas the security of IPv4 networks depends on software, not on the Internet Protocol itself
    innovations in IPv6: scalability, native multicasting, network-layer improvements, mobility
    bigger packets: IPv4 has a limit of 64 kB per packet; IPv6 jumbograms can be up to 4 GB
    IPv6 hosts can generate local and global addresses with Stateless Address Autoconfiguration (SLAAC); classic DHCP-style configuration (DHCPv6) is also supported
    a single, simple control protocol – ICMPv6
    a modular packet header
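
    As promised above, a back-of-the-envelope check of the address-space claim (my arithmetic, assuming Earth’s surface area of roughly 510 million square kilometers):

        # Back-of-the-envelope check: Earth's surface is roughly
        # 510 million km^2, i.e. 5.1e20 mm^2.
        ipv4 = 2 ** 32
        ipv6 = 2 ** 128
        earth_mm2 = 510e6 * 1e6 * 1e6              # km^2 -> m^2 -> mm^2

        print(f"IPv4: {ipv4:.3e} addresses")        # ~4.295e+09
        print(f"IPv6: {ipv6:.3e} addresses")        # ~3.403e+38
        print(f"per mm^2 of Earth: {ipv6 / earth_mm2:.3e}")  # ~6.7e+17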

    Security and privacy in IPv6

    In my opinion the most important feature is end-to-end encryption. The same technology exists in IPv4, but only as a non-universal add-on: the encryption and integrity features familiar from IPv4 VPNs are standard components of IPv6 and available in all v6 networks. IPv6 also has privacy extensions: for example, your public v6 address can be rotated without a VPN, Tor or other popular tools. A MITM (man-in-the-middle) attack is harder in v6 networks, and IPv6 extensions can also help prevent ISPs from tracking users.

    Hardware & Software Support

    Most modern routers support IPv6: Cisco/Linksys, Netgear, Mikrotik, Zyxel and others. If your router doesn’t support IPv6, don’t worry: the OpenWrt or DD-WRT firmwares may help; please check the instructions for your device. What about operating systems and browsers? All modern (and older) operating systems support IPv6: Linux, Windows, OS X, Android, iOS, Symbian, Bada and many others.

    Most of the big websites now work over IPv6: Facebook, Google, YouTube, Yahoo etc.

    If you want to know whether your current Internet service provider (ISP) supports IPv6, open one of these websites in a browser: test-ipv6.com (http://test-ipv6.com/) or ipv6test.google.com.
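
    Besides the browser tests, a quick local check takes only a few lines of Python. This is an illustrative sketch that simply tries to open a TCP connection over AF_INET6 to a host known to publish AAAA records:

        # Illustrative sketch: try a TCP connection over AF_INET6. Failure
        # here despite a passing browser test usually points at a local
        # (OS or router) problem rather than the ISP.
        import socket

        def has_ipv6(host="ipv6.google.com", port=443, timeout=5):
            try:
                infos = socket.getaddrinfo(host, port, socket.AF_INET6,
                                           socket.SOCK_STREAM)
                family, socktype, proto, _, sockaddr = infos[0]
                with socket.socket(family, socktype, proto) as s:
                    s.settimeout(timeout)
                    s.connect(sockaddr)
                return True
            except OSError:
                return False

        print("IPv6 connectivity:", has_ipv6())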

  47. Tomi Engdahl says:

    A monitor that finds illegal signals

    Anritsu continues to expand its field tester arsenal. The MS27101A is a remote spectrum monitor for finding and identifying, for example, illegal transmitters and unlicensed signals on licensed frequencies. The device is intended in particular for national spectrum-management authorities as well as research institutions.

    The new unit can be used in conjunction with Anritsu’s Vision software to build a highly accurate remote monitoring solution. Once a source of radio interference has been identified, the spectrum history of the desired range can be recorded and the interference sources located.

    The MS27101A also enables cross-channel monitoring of the spectrum and of interference coming, for example, from inside buildings.

    The MS27101A includes an integrated web server.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3851:monitori-loytaa-laittomat-signaalit&catid=13&Itemid=101

  48. Tomi Engdahl says:

    Numbers don’t lie—it’s time to build your own router
    With more speed available and hardware that can’t adapt, DIY builds offer peak performance.
    http://arstechnica.com/gadgets/2016/01/numbers-dont-lie-its-time-to-build-your-own-router/

    I’ve noticed a trend lately. Rather than replacing a router when it literally stops working, I’ve needed to act earlier, swapping in new gear because an old router could no longer keep up with increasing Internet speeds available in the area. (Note, I am duly thankful for this problem.) As the latest example, a whole bunch of Netgear ProSafe 318G routers failed me for the last time as small businesses have upgraded from 1.5-9 Mbps traditional T1 connections to 50 Mbps coax (cable).

    Yes, coax, not fiber. Even coax has proved too much for the old ProSafe series. These devices didn’t just fail to keep up; they fell flat on their faces. Frequently, the old routers dropped speed test results from 9 Mbps with the old connection to 3 Mbps or less with the 50 Mbps connection. Obviously, that doesn’t fly.

    These days, the answer increasingly seems to be wireless routers. These tend to be long on slick-looking plastic and brightly colored Web interfaces but short on technical features and reliability. What’s a mercenary sysadmin to do? Well, at its core, anything with two physical network interfaces can be a router. And today, there are lots and lots of relatively fast, inexpensive, and (super important!) fully solid-state generic boxes out there.

    So, the time had finally come. Faced with aging hardware and new consumer offerings that didn’t meet my needs, I decided to build my own router. And if today’s morphing connectivity landscape leaves you in a similar position, it turns out that both the building and the build are quite fast.

    Hardware, hardware, hardware

    We’ll go through the how-to in a future piece, but today it’s important to establish why a DIY router-build may be the best option. To do that, you first need to understand today’s general landscape.

    In the consumer world, routers mostly have itty-bitty little MIPS CPUs under the hood without a whole lot of RAM (to put it mildly). These routers largely differentiate themselves from one another based on the interface: How shiny is it? How many technical features does it have? Can users figure it out easily?

    At the higher end of the SOHO market, you start seeing some smartphone-grade ARM CPUs and a lot more RAM. These routers, like the Netgear Nighthawk series, one of which we’ll be hammering on later, feature multiple cores, higher clock speeds, and a whole lot more RAM. They also feature much higher price tags than the cheaper competition. I picked up a Linksys EA2750 for $89, but the Netgear Nighthawk X6 I got with it was nearly three times more expensive (even on holiday sale!) at $249.

    After some good old-fashioned Internet scouring and dithering, finally I took the Alibaba plunge and ordered myself a new Partaker Mini PC from Shenzhen Inctel Technology Company. After $240 for the router itself and another $48 for a 120GB Kingston SSD from Newegg, I’d spent about $40 more on the Homebrew Special than I had on the Nighthawk. Would it be worth it?

    I’ve got a botnet in my pocket, and I’m ready to rock it

    I briefly considered setting up some kind of hideous, Docker-powered monstrosity with tens of thousands of Linux containers with individual IP addresses, all clamoring for connections and/or serving up webpages. Then I came to my senses. As far as the routers are concerned, there’s no difference between maintaining connections to thousands of individual IP addresses or just to thousands of ports on the same IP address. I spent a little bit of time turning Lee Hutchinson’s favorite webserver nginx into a ridiculous Lovecraftian monster with 10,000 heads and an appetite for destruction.
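
    The point about ports versus IP addresses is easy to reproduce. Here is a minimal sketch of my own (with a hypothetical LAN test server address) that holds a couple of thousand concurrent TCP flows open so a router’s connection table gets a workout; it assumes a server such as nginx is listening on those ports, and you should raise the process’s file-descriptor limit before trying anything this size.

        # Illustrative sketch: hold many concurrent TCP flows open against
        # a test server listening on a range of ports. TARGET is a
        # hypothetical LAN address; adjust PORTS to match the server.
        import asyncio

        TARGET = "192.168.1.10"                    # hypothetical test server
        PORTS = range(10000, 12000)                # 2000 concurrent flows

        async def hold_connection(port):
            try:
                reader, writer = await asyncio.open_connection(TARGET, port)
                await asyncio.sleep(30)            # keep the flow in the state table
                writer.close()
                await writer.wait_closed()
            except OSError:
                pass                               # port closed or table full

        async def main():
            await asyncio.gather(*(hold_connection(p) for p in PORTS))

        asyncio.run(main())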

    That’s the Homebrew Special flexing its crypto muscle. It has an OpenVPN server running. For that test, the WAN-side server, Menhir, is connected to the router’s on-board OpenVPN server.

    In the name of thoroughness, we should observe one shared limitation, something common to all the consumer network gear I’ve ever managed: the desire to reboot after almost any change. Some of those reboots take well over a minute. I haven’t got the foggiest idea why, but whatever the reason, the Homebrew Special isn’t afflicted with this industry standard. You make a change, you apply it, you’re done. And if you do need to reboot the Special? It’s up again in 12 seconds. (I timed it by counting dropped pings.)

  49. Tomi Engdahl says:

    Devindra Hardawar / Engadget:
    LinkNYC free public gigabit WiFi goes live for beta testing, shows speeds of 280 Mbps down, 317 Mbps up, plans to have 500 kiosks running by July, 4450 by 2020 — LinkNYC’s free gigabit Wi-Fi is here, and it is glorious — I’m standing on the corner of 15th Street and Third Avenue in New York City, and I’m freezing.

    LinkNYC’s free gigabit Wi-Fi is here, and it is glorious
    It’s so fast it’ll make you hate your ISP.
    http://www.engadget.com/2016/01/19/linknyc-gigabit-wifi-hands-on/

    I’m standing on the corner of 15th Street and Third Avenue in New York City, and I’m freezing. But my iPhone is on fire. After connecting to one of LinkNYC’s gigabit wireless hotspots, the futuristic payphone replacements that went live for beta testing this morning, I’m seeing download speeds of 280 Mbps and upload speeds of 317 Mbps (based on Speedtest’s benchmark). To put it in perspective, that’s around ten times the speed of the average American home internet connection (which now sits at 31 Mbps). And to top it all off, LinkNYC doesn’t cost you a thing.

    Of course, my experience was a charmed one. The LinkNYC hotspot (or Link, for short) I tested went live just a few hours before I arrived, and there were only a handful of other people on it. I was also standing right in front of it — speeds will vary depending on your distance, as well as the type of device you’re using. Still, seeing Speedtest’s bandwidth meter rocket to 300 Mbps was astonishing.

    LinkNYC is the culmination of New York City’s big push to modernize payphones. Its tall and thin kiosks feature gigabit fiber connections (yes, most are plugged right into fiber lines) delivered via 802.11ac WiFi, USB charging, free voice calls, and a tablet for internet access, maps and directions. They also sport two large screens which display ads (the only way LinkNYC is making money, at the moment), as well as public service announcements. Wireless range clocks in at 150 feet, though as always, that may be affected by obstacles, windows and walls.

    The process was a bit more complex for my iPhone, since I wanted to test out LinkNYC’s encrypted offering. After receiving the prompt to install the additional security key, I had to manually approve it and restart my WiFi before my phone would hop onto the private network.

    When it comes to security, LinkNYC won’t be able to detect anything you’re doing on its private network. On its public network, though, it’ll be able to track your MAC address, browser and pages you’ve viewed (no different than most other WiFi providers).

