Telecom and networking trends for 2017

It’s always interesting (and dangerous) to lay out some predictions for the future of technology, so here are a few visions:

The exponential growth of broadband data is driving wireless (and wired) communications systems to more effectively use existing bandwidth. Mobile data traffic continues to grow, driven both by increased smartphone subscriptions and a continued increase in average data volume per subscription, fueled primarily by more viewing of video content. Ericsson forecasts mobile video traffic to grow by around 50% annually through 2022, to account for nearly 75% of all mobile data traffic. Social networking is the second biggest data traffic type. To make effective use of the wireless channel, system operators are moving toward massive-MIMO, multi-antenna systems that transmit multiple wide-bandwidth data streams—geometrically adding to system complexity and power consumption. Total mobile data traffic is expected to grow at 45% CAGR to 2020.
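
To put those growth rates in perspective, here is a quick back-of-envelope sketch in Python (the starting traffic volume is a made-up placeholder; only the growth rates come from the forecasts above):

```python
import math

# Compound growth implied by the forecasts quoted above (illustrative only).
# The 2016 starting volume is a hypothetical placeholder, not an Ericsson figure.
start_eb_per_month = 8.0   # assumed total mobile traffic in 2016, exabytes/month
total_cagr = 0.45          # "grow at 45% CAGR to 2020"

for year in range(2016, 2021):
    volume = start_eb_per_month * (1 + total_cagr) ** (year - 2016)
    print(f"{year}: ~{volume:.1f} EB/month")

# A 45% CAGR doubles traffic roughly every two years.
print("doubling time:", round(math.log(2) / math.log(1 + total_cagr), 1), "years")
```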

5G cellular technology is still in development and is far from ready in 2017. International groups have set a 2020 deadline to agree on frequencies and standards for the new equipment, so anything before that is pre-standard. Expect to see many 5G announcements that might not match what 5G actually turns out to be once the standard is ready. The boldest statement is that Nokia and KT plan a 2017 launch of the world’s first mobile 5G network in South Korea: a commercial trial system operating in the 28 GHz band. Wireless spectrum above 5 GHz will enable solutions for a massive increase in bandwidth and for latencies of less than 1 ms.

CableLabs is working toward standardization of an AP Coordination protocol to improve in-home WiFi, because a single access point (AP) often is not enough to provide a reliable connection and consistent speed to multiple devices throughout a large home. The hope is that something will appear around mid-2017. A mesh AP network is a self-healing, self-forming, self-optimizing network of mesh access points (MAPs).

There will be more and more gigabit Internet connections in 2017; gigabit Internet is accelerating on all fronts. Until recently, FTTH has been the dominant gigabit technology. Common options now include fiber-to-the-home (FTTH), DOCSIS 3.0 and 3.1 over cable’s HFC plant, G.fast over telco DSL networks, 5G cellular, and fiber-to-the-building coupled with point-to-point wireless. AT&T recently launched its AT&T Fiber gigabit service. Cable’s DOCSIS 3.0 and 3.1 are cheaper and less disruptive than FTTH in that they do not require a rip-and-replace of the existing outside plant. DOCSIS 3.1, which has just begun to be deployed at scale, is designed to deliver up to 10 Gbps downstream over existing HFC networks (most deployments to date have offered 1 Gbps speeds). G.fast is just beginning to come online with a few deployments, typically serving MDUs at distances of 500 meters or less. 5G cellular technology is still in development, and standards for it do not yet exist. Another promising wireless technology for delivering gigabit speeds is point-to-point millimeter wave, which uses spectrum between 30 GHz and 300 GHz.

There are also some trials of 10 Gbit/s service. For example, Altice USA (Euronext: ATC) announced plans to build a fiber-to-the-home (FTTH) network capable of delivering broadband speeds of up to 10 Gbps across its U.S. footprint. The five-year deployment plan is scheduled to begin in 2017.

Interest in using TV white space is increasing in the USA in 2017. The major factor driving market growth is the ability to provide low-cost broadband to remote and non-line-of-sight regions. The rural Internet access market is expected to grow at a significant rate between 2016 and 2022. According to MarketsandMarkets, the global TV white space market was valued at $1.2 million in 2015 and is expected to reach approximately $53.1 million by 2022, at a CAGR of 74.30% during the forecast period.

The rapid growth of the Internet and cloud computing has increased bandwidth requirements for data center networks. This in turn is expected to increase demand for optical interconnects in next-generation data center networks.

Open Ethernet networking platforms will make a noticeable impact in 2017. The availability of full-featured, high-performance, and cost-effective open switching platforms, combined with open network operating systems such as Cumulus Linux, Microsoft SONiC, and OpenSwitch, should finally lead to significant volume uptake in 2017.

The network becomes more and more software-controlled in 2017. NFV and SDN will mature as automated networks move into production. Over the next five years, nearly 60 percent of hyperscale facilities are expected to deploy SDN and/or NFV solutions. IoT will push SDN adoption into campus networks.

SDN implementations are increasingly taking a platform approach, with plug-and-play support for any VNF, topology, and analytics, all instrumented and automated. Some companies are discovering the security benefits of SDN: virtual segmentation and automation. The importance of specific SDN protocols (OpenFlow, OVSDB, NETCONF, etc.) will diminish as the many universes of SDN/NFV solidify into standard models. More vendors are opening up their SDN platforms to third-party VNFs. In Linux-based systems, eBPF and XDP are delivering flexibility, scale, security, and performance for a broad set of functions beyond networking, without bypassing the kernel.
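
As a concrete taste of the eBPF/XDP idea, here is a minimal sketch using the bcc toolkit’s Python front end, one common way to experiment with XDP. Assumptions: bcc is installed, the kernel supports XDP, the script runs as root, and the interface name "eth0" is only a placeholder.

```python
# A minimal XDP packet counter via bcc (illustrative sketch, not a production tool).
import time
from bcc import BPF

prog = r"""
#define KBUILD_MODNAME "xdp_counter"
#include <uapi/linux/bpf.h>

BPF_ARRAY(pkt_count, u64, 1);

// Runs in the kernel at the driver level, before the normal network stack.
int xdp_count(struct xdp_md *ctx) {
    u32 key = 0;
    u64 *value = pkt_count.lookup(&key);
    if (value)
        __sync_fetch_and_add(value, 1);
    return XDP_PASS;  // observe only; every packet continues through the kernel
}
"""

device = "eth0"  # placeholder interface name
b = BPF(text=prog)
fn = b.load_func("xdp_count", BPF.XDP)
b.attach_xdp(device, fn, 0)
try:
    while True:
        time.sleep(2)
        total = sum(v.value for v in b["pkt_count"].values())
        print("packets seen:", total)
finally:
    b.remove_xdp(device, 0)
```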

For 2016 it was predicted that Gigabit Ethernet sales would start to decline as the needle moved away from 1 Gigabit Ethernet toward faster standards (2.5, 5.0, or 10 Gbps; NBASE-T is basically underclocked 10GBASE-T running at 2.5 or 5.0 Gbps instead of 10 Gbps). I have not yet seen the results of that prediction, but that does not stop me from making new ones. I expect 10GbE sales to peak in 2017 and begin a steady decline afterwards, as 10GbE is pushed aside by 25, 50, and 100GbE in data center applications. 25 Gbit/s Ethernet is available now from all of the major server vendors. 25GbE can start to become the new 10GbE, since it offers 2.5x the throughput at only a modest price premium over 10 Gbit/s.

100G and 400G Ethernet will still have some implementation challenges in 2017. Data-center customers are demanding a steep downward trajectory in the cost of 100G pluggable transceivers, but existing 100G module multi-source agreements (MSAs) such as PSM4 and CWDM4 have limited room for cost reduction due to the cost of the fiber (PSM4) and the large number of components (both PSM4 and CWDM4). It seems that dual-lambda PAM4 and existing 100G Ethernet (100GE) solutions such as PSM4 and CWDM4 will not be able to achieve the overall cost reductions demanded by data-center customers. At OFC 2016, AppliedMicro showcased the world’s first 100G PAM4 single-wavelength solution for 100G and 400G Ethernet. We might see 400GE in the second half of 2017 or the early part of 2018.
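
For context on why single-wavelength PAM4 matters, here is a small illustrative calculation (the helper function is mine and it ignores FEC and encoding overhead, which real modules add):

```python
# PAM4 carries 2 bits per symbol versus 1 for NRZ, so it halves the symbol rate
# needed for a given line rate (FEC/encoding overhead ignored for simplicity).
def symbol_rate_gbaud(line_rate_gbps, bits_per_symbol):
    return line_rate_gbps / bits_per_symbol

NRZ, PAM4 = 1, 2
print("100G on one wavelength, NRZ: ", symbol_rate_gbaud(100, NRZ), "GBd")   # 100 GBd
print("100G on one wavelength, PAM4:", symbol_rate_gbaud(100, PAM4), "GBd")  # 50 GBd
print("400G over 4 wavelengths, PAM4:", symbol_rate_gbaud(400 / 4, PAM4), "GBd each")
```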

As the shift to the cloud accelerates in 2017, the traffic routed through cloud-based data centers is expected to quadruple in the next four years, according to the results of the sixth annual Global Cloud Index published by Cisco. Public cloud is growing faster than private cloud. An estimated 68 percent of cloud workloads will be deployed in public cloud data centers by 2020, up from 49 percent in 2015. According to Cisco, hyperscale data centers will account for 47 percent of the global server fleet and support 53 percent of all data center traffic by 2020.

The modular data center market has seen high growth and adoption in the last few years, and this trend is anticipated to continue in the years to come. These data centers are typically built from standard 20 ft. or 40 ft. container modules. The modular data center market is anticipated to grow at a CAGR of 24.1% during the period 2016–2025, reaching US$22.41 billion in 2025. Also in 2017, the first cracks will start to appear in Intel’s vaunted CPU dominance.

The future of network neutrality is uncertain in 2017. The Senate failed to reconfirm Democratic pro-net-neutrality FCC Commissioner Jessica Rosenworcel, portending new Trump-era leadership and agenda, and net neutrality faces extinction under Trump. One of Trump’s advisers on the FCC, Mark Jamison, argued last month that the agency should only regulate radio spectrum licenses and scale back all its other functions. When Chairman Tom Wheeler, the current head of the FCC, steps down, Republicans will hold a majority.

 

1,115 Comments

  1. Tomi Engdahl says:

    Molex debuts flame-retardant Polymicro FR optical fibers
    http://www.cablinginstall.com/articles/2017/09/molex-polymicro-fr.html?cmpid=enl_cim_cim_data_center_newsletter_2017-09-21

    Molex has released a flame retardant version of its Polymicro optical fiber. The Molex Polymicro Flame-Retardant (FR) Optical Fibers meet the UL 94 V-0 flammability standard for component materials used in telecommunications and industrial applications where a low flammability rating is essential. “Having a fiber with a low flammability buffer coating offers a distinct advantage in applications where enhanced flammability protection is of paramount importance,” notes Jim Clarkin, Polymicro general manager, Molex. “Strong, durable Molex Polymicro FR Fibers meet industry requirements for flammability protection.”

    Per the company, the “Polymicro FR Optical Fibers are available in telecommunications grade singlemode or 50µm and 62.5µm graded index construction with a 125µm glass OD/250µm buffer OD.

    Designed for superior dimensional control and tight tolerances, Polymicro FR Optical Fibers have an operational temperature range of -40 to +100°C. The UL 94 V-0 flammability rating indicates characteristics recommended in applications requiring increased protection from flame propagation and combustion. The buffer utilized in Polymicro FR Optical Fiber is mechanically strippable similar to an acrylate buffer and imparts exceptional fiber strength.

    http://www.molex.com/polymicro/opticalfibers.html

  2. Tomi Engdahl says:

    TIA’s TR-42 Committee bustling with activity
    http://www.cablinginstall.com/articles/print/volume-25/issue-9/features/standards/tia-s-tr-42-committee-bustling-with-activity.html?cmpid=enl_cim_cim_data_center_newsletter_2017-09-21

    The Telecommunications Industry Association’s (TIA) TR-42 Telecommunications Cabling Systems Committee meets three times per year, developing and revising standard documents for telecommunications cabling systems that are used predominantly in North America.

    TR-42 approved the ANSI/TIA-942-B Telecommunications Infrastructure Standard for Data Centers.

    The completed ANSI/TIA-942-B standard includes the following, among many other, changes from the “A” revision.

    It incorporates Addendum 1 to the 942-A standard, which addresses data center fabrics, as an Annex.
    It adds 16- and 32-fiber MPO-style array connectors as an additional connector type for termination of more than two fibers. The 16- and 32-fiber connectors were standardized when ANSI/TIA-604-18 was published in 2016.
    It adds Category 8 as an allowed type of balanced twisted-pair cable, and changes the recommendation for Category 6A balanced twisted-pair cable to Category 6A or higher.
    It adds OM5 as an allowed fiber type. The TIA-492-AAAE standard specifies OM5 fiber, which is designed to support short-wavelength division multiplexing.

    TR-42 also approved for publication the ANSI/TIA-1179-A Healthcare Facility Telecommunications Infrastructure Standard. The “A” in the standard’s title indicates it is the first revision of the original 1179 standard, which was published in 2010.

    A key element of the 1179-A standard, which was retained from the original document, is a table of recommended work-area outlet densities for different areas in a healthcare facility.

    Montstream listed the following significant changes from the original 1179 standard document.

    Balanced twisted-pair backbone cabling is Category 6A minimum
    Balanced twisted-pair horizontal cabling is Category 6A minimum
    OM4 is the recommended minimum for multimode optical-fiber cabling
    A minimum of two fibers are required for optical-fiber backbone cabling
    Array connectors are permitted for optical-fiber cabling in the work area
    MUTOAs (multi-user telecommunications outlet assemblies) and consolidation points may be used as additional network elements
    Requirements were added for: telecommunications pathways and spaces (additional requirements to those in ANSI/TIA-569-D); bonding and grounding; firestopping; broadband coaxial cabling; multi-tenant building spaces
    Recommendations were added for cabling for wireless access points and distributed antenna systems

    Multiple projects for single-twisted-pair cabling

    In June TR-42 initiated four standards projects related to single-twisted-pair cabling systems.

    One of those projects is the effort that ultimately will result in the publication of ANSI/TIA-568.5, specifying single-twisted-pair cabling and components. The standard will provide specifications for cables, connectors, cords, links, and channels using one-pair connectivity in non-industrial networks, according to a working statement of the standard’s scope. The standard will be geared toward what are called “MICE1” environments. MICE is an acronym for mechanical, ingress, climatic, and electromagnetic. The TIA-1005 standard series includes MICE tables, which numerically characterize the network environment’s severity for each of the four conditions. The higher the number, the more severe the environment. In practical application, a MICE1 environment is a commercial office space.
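
    As a toy illustration of how a MICE designation encodes those four conditions (this sketch assumes the commonly used three-level scale; the helper function is hypothetical and not part of any TIA document):

```python
# Parse a MICE designation such as "M1I1C1E1" into its four severity levels.
# Assumes the usual 1-3 scale ("the higher the number, the more severe").
def parse_mice(designation: str) -> dict:
    return {designation[i]: int(designation[i + 1])
            for i in range(0, len(designation), 2)}

office = parse_mice("M1I1C1E1")     # a commercial office space
harsh  = parse_mice("M3I3C3E3")     # worst case in every category
print(office)
print("worst condition level on the factory floor:", max(harsh.values()))
```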

    Another effort that TR-42 initiated in June is an addendum (Addendum 2) to the ANSI/TIA-568.0-D standard. The addendum will add single balanced twisted-pair use cases, topology, and architecture to the standard. “The standard will include installation requirements and additional guidelines for transitioning from 4-pair to 1-pair cabling,” says an early-stage scope of the standard.

    Additionally, this document will provide single-twisted-pair cabling guidelines for emerging Internet of Things and machine-to-machine (M2M) applications that will require higher density, reduced size, and greater flexibility than can be provided by existing technology.

  3. Tomi Engdahl says:

    Microsoft and Facebook’s finished undersea cable is faster than Google’s alternative
    http://mashable.com/2017/09/22/microsoft-facebook-marea-cable/?utm_cid=mash-com-fb-socmed-link#OBVwQFEWXkqM

    More than 17,000 feet below the ocean’s surface, there now lies the “most technologically advanced subsea cable,” providing up to 160 terabits per second (Tbps) of data, beating Google’s alternative, now poorly named, “Faster.” The cable is the handiwork of Facebook, Microsoft, and Spanish telecommunication company Telxius.

    Construction on the cable, which stretches 4,000 miles from Virginia Beach, Virginia to Bilbao, Spain, began in August 2016. Microsoft announced its completion on Thursday, but it won’t be operational until early 2018.

  4. Tomi Engdahl says:

    Deborah Bach / Microsoft:
    Microsoft, Facebook, and Telxius complete 160Tbps Marea cable, the highest-capacity subsea cable across the Atlantic — People and organizations rely on global networks every day to provide access to internet and cloud technology. Those systems enable tasks both simple and complex …

    Microsoft, Facebook and Telxius complete the highest-capacity subsea cable to cross the Atlantic
    https://news.microsoft.com/features/microsoft-facebook-telxius-complete-highest-capacity-subsea-cable-cross-atlantic/

    People and organizations rely on global networks every day to provide access to internet and cloud technology. Those systems enable tasks both simple and complex, from uploading photos and searching webpages to conducting banking transactions and managing air-travel logistics. Most people are aware of their daily dependency on the internet, but few understand the critical role played by the subsea networks spanning the planet in providing that connectivity.

  5. Tomi Engdahl says:

    IETF doc seeks reliable vSwitch benchmark
    Once switches become just another function to spawn, you’ll need to know how they’ll fare
    https://www.theregister.co.uk/2017/09/25/opennfv_vsperf/

    If you fancy wrapping your mind around the complexities that make virtual switches (vSwitches) hard to benchmark, an IETF informational RFC is worth a read.

    Put together by Maryam Tahhan and Billy O’Mahony of Intel (note the usual disclaimer that RFCs are the work of individuals not employers) and Al Morton of AT&T, RFC 8204 is designed to help test labs get repeatable, reliable vSwitch benchmarks.

    It’s part of the ongoing work of the OpenNFV VSperf (vSwitch Performance) project group, which wants to avoid the kind of misunderstanding that happens when people set up tests that suit themselves.

    As the RFC notes: “A key configuration aspect for vSwitches is the number of parallel CPU cores required to achieve comparable performance with a given physical device or whether some limit of scale will be reached before the vSwitch can achieve the comparable performance level.”

    And that’s a knotty problem, because a vSwitch has available to it all of the knobs, buttons, levers and tweaks a sysadmin can apply to the server that hosts it.

    Moreover, as they note, benchmarks have to be repeatable. Since the switches will likely run as VMs on commodity servers, there’s a bunch of configuration parameters tests should capture that nobody bothers with when they’re testing how fast a 40 Gbps Ethernet can pass packets port-to-port.

    The current kitchen-sink list includes (for hardware) BIOS data, power management, CPU microcode level, the number of cores enabled and how many of those were used in a test, memory type and size, DIMM configurations, various network interface card details, and PCI configuration.

    There’s an even-longer list of software details: think “everything from the bootloader up”.
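
    A tiny sketch of what capturing a slice of that configuration might look like (the command list is illustrative rather than the RFC’s full list; Linux-only, dmidecode needs root, and the NIC name "eth0" is a placeholder):

```python
import json
import platform
import subprocess

# Collect part of the host details a vSwitch benchmark report should record.
def run(cmd):
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    except Exception as exc:
        return f"unavailable: {exc}"

report = {
    "kernel": platform.release(),
    "cpu": run(["lscpu"]),
    "bios": run(["dmidecode", "-t", "bios"]),
    "memory": run(["dmidecode", "-t", "memory"]),
    "pci": run(["lspci"]),
    "nic_driver": run(["ethtool", "-i", "eth0"]),
}
# Truncate the long command outputs just for display.
print(json.dumps({k: v[:200] for k, v in report.items()}, indent=2))
```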

  6. Tomi Engdahl says:

    Cut Off From The World, Puerto Ricans Search For A Ghost Of A Signal
    http://www.npr.org/sections/thetwo-way/2017/09/24/553373996/cut-off-from-the-world-puerto-ricans-search-for-a-ghost-of-a-signal

    On the side of a busy expressway in northern Puerto Rico, dozens of cars stand in a line, parked at careless angles off the shoulder. Drivers hold their phones out of car windows; couples walk along the grass raising their arm skyward.

    This is not a picturesque stretch of road. It’s about 90 degrees out, and the sun is beating down relentlessly.

    Hurricane Maria destroyed large swaths of Puerto Rico’s infrastructure when it hit the island last week. Among other things, it wiped out cell service. The island was totally incommunicado — but signals are starting to trickle back in some places, like this stretch along Expressway 22 on the island’s northern side.

    “We parked and realized that we could make phone calls,

    “All we need is one contact,” Marco Dorta said. “They’ll tell everyone else.”

    And even here, the search is often futile, he notes. Still, people hold their phones in the air, out of car windows, or stand on little hills to try to reach higher — “doing things that I know [are] hopeless,” Nieves said. If you don’t have a signal, you don’t have a signal.

    “But, you know, faith is the last thing you lose,” he said.

  7. Tomi Engdahl says:

    IEEE 802.3 Beyond 10km Optical PHYs Study Group aims to build consensus on standards for 50, 200, and 400 Gb/s Ethernet beyond 10 km
    http://www.cablinginstall.com/articles/2017/09/ieee-10km-phy.html?cmpid=enl_cim_cim_data_center_newsletter_2017-09-25

    IEEE and the IEEE Standards Association (IEEE-SA) have announced the official launch of the IEEE 802.3 Beyond 10km Optical PHYs Study Group. Per a press release, “Chartered by the IEEE 802 LAN/MAN Standards Committee (LMSC) Executive Committee, and launched under the auspices of the IEEE 802.3 Ethernet Working Group, the new study group aims to develop a Project Authorization Request (PAR) and Criteria for Standards Development (CSD) responses for optical solutions targeting physical distances beyond 10km for 50 Gb/s, 200 Gb/s, and 400 Gb/s Ethernet.”

    “The launch of the IEEE 802.3 Beyond 10km Optical PHYs Study Group represents a first step towards standardization that will meet the needs of network providers, such as wireless operators across the globe where bandwidth demands are projected to vary significantly from region to region,”

  8. Tomi Engdahl says:

    Keysight BERT now supports 64 Gbaud NRZ
    https://www.edn.com/electronics-products/other/4458810/Keysight-BERT-now-supports-64-Gbaud-NRZ

    With IEEE 802.3bs and OIF CEI-56G standards and their supporting ICs now in place, 400 Gbit/s (400G) serial links are becoming reality. They’re based on 8 lanes of 50 Gbits/s each. Despite all the talk in recent years that PAM-4 modulation will replace NRZ at data rates of 50 Gbits/s and higher per lane, NRZ keeps hanging on. It’s much like how we thought the inexpensive FR4 PCB material would go away long ago, but it won’t. That’s because the signal processing of transmitters and receivers (plus better PCB design practices) gives these technologies ever longer lives. Recognizing that NRZ still has teeth, Keysight Technologies has upgraded its M8040A bit-error-ratio tester (BERT) to support 64 Gbit/s NRZ signaling.

  9. Tomi Engdahl says:

    DP-QPSK 100 Gb/400 Gb Coherent Optical Receiver Lab Buddy
    https://www.discoverysemi.com/Product_Pages/DSCR413.php?utm_source=lightwave_newsletter&utm_medium=email&utm_campaign=R413&utm_content=r413_enews

    The DSC-R413 is an O/E instrument designed to convert DP-QPSK optical data to differential electrical signals. The R413 offers several user-adjustable characteristics such as RF gain, bandwidth, and mode of operation (AGC or manual gain control) and is ideally suited for a variety of applications. Additionally, the DC photocurrents of the balanced receivers are available for monitoring.

  10. Tomi Engdahl says:

    Microsoft and Facebook just laid a 160-terabits-per-second cable 4,100 miles across the Atlantic
    Enough bandwidth to stream 71 million HD videos at the same time
    https://www.theverge.com/2017/9/25/16359966/microsoft-facebook-transatlantic-cable-160-terabits-a-second

    Microsoft, Facebook, and the telecoms infrastructure company Telxius have announced the completion of the highest capacity subsea cable to ever cross the Atlantic Ocean. The cable is capable of transmitting 160 terabits of data per second, the equivalent of streaming 71 million HD videos at the same time, and 16 million times faster than an average home internet connection, Microsoft claims. The cable will be operational by early 2018.
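
    The “71 million HD videos” comparison implies a per-stream rate that can be checked in a couple of lines (illustrative arithmetic only):

```python
# 160 Tbps spread across 71 million simultaneous streams (figures from the article).
cable_bps = 160e12
streams = 71e6
print(round(cable_bps / streams / 1e6, 2), "Mbps per HD stream")  # ~2.25 Mbps
```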

  11. Tomi Engdahl says:

    Ramp Begins for 10 Gbps as Broadband Access Network Backbone
    http://www.broadbandtechreport.com/whitepapers/2017/09/ramp-begins-for-10-gbps-as-broadband-access-network-backbone.html?cmpid=enl_btr_weekly_2017-09-26

    The 10-Gbps downstream capacity DOCSIS 3.1 supports holds the key to gigabit now and higher-rate services later. Several operators have embarked on DOCSIS 3.1 rollouts for just this reason. Yet DOCSIS 3.1 is not the only transmission technology that offers 10 Gbps capabilities. Operators therefore have a choice of how to meet their gigabit-and-beyond requirements as they upgrade their infrastructures with 10 Gbps (and greater) capacity in mind.

    How DOCSIS 3.1 will support symmetrical 10 Gbps
    How the IEEE is expanding the capabilities of EPON

  12. Tomi Engdahl says:

    New Trends in Telecom Brought by Big Data Analytics
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1332354&

    Rapid adoption of Big Data analytics in the telecom industry has resulted in the identification of a few important shortfalls.

    For one, initial big data solutions were inflexible and slow. The realization of these shortcomings is fueling future trends in the Telecom Industry. Let’s take a look at these trends and how they may shape the industry for years to come:

    Constant Access to Ad-Hoc Queries
    Ad-hoc, on-demand analytics is likely one of the more powerful trends shaping the industry, but it’s a bit technical so it’s important to cover the basics.

    What is an Ad Hoc Query?
    In the past, analytics would focus on pre-determined queries and parameters. These queries would be run against a pre-calculated OLAP cube. The cube or stored procedures are usually quite specific, and are calculated to support a handful of queries. This reduces flexibility quite significantly when a question being asked wasn’t planned ahead of time.

    So, an ad-hoc (“for this purpose”) query is really any query which the database designer or DBA hadn’t anticipated.

    How Does This Relate to Telecom?
    Until recently, ad-hoc querying was considered too complex, too slow, and requiring too much data volume to be practical. Now however, newer, more sophisticated databases and technologies are making the practice possible.

    Ad-hoc analytics can help telecom achieve a 360 degree view of customers. They can help discover and optimize new sales and marketing initiatives, enhance customer service, and improve operational efficiency.

    Telecom companies can create queries looking for things like:

    What products are losing customers?
    Which price changes impacted defection?
    Are there any major changes in customer service metrics?
    What factors outside the company are impacting customer churn?
    Are there any trends in social media comments?
    Are there any trends in customer satisfaction surveys?
    How do call detail records relate to customer satisfaction?
    What percentage of customers who lodged an email complaint cancelled their service?
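
    Questions like these map naturally onto ad-hoc SQL once the raw events sit in a queryable store. A minimal, self-contained sketch (Python with sqlite3; the table and column names are hypothetical, purely to illustrate an unplanned query like the last one above):

```python
import sqlite3

# Hypothetical, tiny stand-in for a telecom data store; names are made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE complaints (customer_id INTEGER, channel TEXT, cancelled INTEGER)")
con.executemany("INSERT INTO complaints VALUES (?, ?, ?)",
                [(1, "email", 1), (2, "email", 0), (3, "phone", 0), (4, "email", 1)])

# An "ad-hoc" question nobody pre-built an OLAP cube for:
# what share of customers who complained by email later cancelled?
row = con.execute("""SELECT 100.0 * SUM(cancelled) / COUNT(*)
                     FROM complaints WHERE channel = 'email'""").fetchone()
print(f"{row[0]:.1f}% of email complainers cancelled")  # 66.7% on this toy data
```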

    Additionally, this is part of the broader switch away from batch processing and Hadoop. Why are telecom organizations moving away from Hadoop and batch processing, you ask?

  13. Tomi Engdahl says:

    Big Switch, HPE, in ‘You complete me. No, you complete me’ tryst
    https://www.theregister.co.uk/2017/08/09/big_switch_hpe_alliance/

    Dell’s still got more network OS options, but as anyone who remembers NetWare vs. NT can tell you, a NOS war is fun

    Big Switch Networks and HPE have buddied up to push each other’s stuff and the concept of open networking.

    The deal will see HPE resell Big Switch’s Big Mon and Cloud Fabric on its Altoline switches.

    HPE’s kit isn’t very good at the kind of monitoring Big Switch’s products can enable, so HPE feels the deal gives it the chance to sell to clients who like to keep on top of network traffic for security or granular network grooming. Such tasks are often the province of carriers and like any sensible vendor both HPE and Big Switch want some of them as customers. They’re therefore hopeful the tie-up will let both do business with more such organisations.

    Big Switch’s business development veep Susheel Chitre told The Register both companies’ customers have also been asking if Big Switch was certified for Altoline.

  14. Tomi Engdahl says:

    AT&T Seeks Supreme Court Review on Net Neutrality Rule
    https://www.bloomberg.com/news/articles/2017-09-28/broadband-providers-to-seek-high-court-review-on-net-neutrality

    AT&T Inc. and other broadband providers asked the U.S. Supreme Court to overturn the Obama-era “net neutrality” rule barring internet service providers from slowing or blocking rivals’ content.

    The appeals, filed Thursday, will put new pressure on a rule enacted in 2015 when the Federal Communications Commission was under Democratic control. Filing a separate appeal from AT&T were the United States Telecom Association, a trade group, and broadband service provider CenturyLink Inc.

  15. Tomi Engdahl says:

    Vibration Monitoring: Distributed fiber-optic vibration sensor uses low-cost interferometry and OTDR
    http://www.laserfocusworld.com/articles/print/volume-53/issue-09/world-news/vibration-monitoring-distributed-fiber-optic-vibration-sensor-uses-low-cost-interferometry-and-otdr.html?cmpid=enl_lfw_newsletter_2017-09-26

    Fiber-optic sensors that measure distributed vibration along a length have myriad applications, including oil and gas pipeline monitoring, border security, and structural integrity monitoring for bridges, aircraft, and other critical infrastructures.

    Because vibration induces a phase change in the optical signal reflected back from an optical fiber, interferometers are commonly used in low-cost distributed optical fiber vibration sensor (DOFVS) measurement setups; unfortunately, these interferometer-only setups cannot distinguish between simultaneous multi-point vibrations, limiting their utility.

    And although phase-sensitive optical time-domain reflectometry (Φ-OTDR) can distinguish multi-point vibrations with high sensitivity (and can be further improved by coherent detection, modulated pulses, and optical amplification), the ultranarrow-linewidth, powerful lasers required for these systems are not only expensive, but subject to instability including Rayleigh scattering and frequency drift.

    To address the drawbacks of each of these methods, researchers at Huawei Technologies (Shenzhen, China), Southeast University (Nanjing, China), and Northeastern University (Shenyang, China) have developed a real-time DOFVS that uses the principles of Sagnac interference and coherent OTDR technology, requiring only a low-cost broadband superluminescent diode (SLD) to achieve 16 km range with 110 m spatial resolution and ± 0.5 m location error with longer averaging.

    In the setup, the 35 mW, 1550 nm center wavelength and 40 nm bandwidth SLD source reduces the noise caused by the coherent interaction of Rayleigh backscattering signals from different scattering points in the fiber.

    Light reflected back from the optical fiber because of the presence of vibration, strain, or temperature events can pass back through both arms of the interferometer and into a balanced detector, which also sees a portion of the SLD input as a reference signal.

    A series of mathematical equations sees vibration as a phase change in the interference signals of the interferometer and both single-point and multi-point vibrations can be calculated from the reflectance data.

  16. Tomi Engdahl says:

    Windstream to demo SDN-enabled multi-vendor service orchestration at MEF17
    http://www.lightwaveonline.com/articles/2017/09/windstream-to-demo-sdn-enabled-multi-vendor-service-orchestration-at-mef17.html?cmpid=enl_lightwave_lightwave_datacom_2017-09-26

    Windstream (NASDAQ: WIN) will partner with Ciena, Coriant, and Infinera to participate in the MEF17 Proof of Concept (PoC) Showcase. Windstream and its collaborators will offer a demonstration of multi-vendor service orchestration enabled via software-defined networking (SDN). The PoC demonstration will show off the capabilities of Windstream’s Software Defined Network Orchestrated Waves (SDNow) service (see “Windstream Wholesale offers SDN orchestrated wavelength services via SDNow”).

    The MEF PoC Showcase demonstration will see on-demand, end-to-end activation of SDNow 10-Gbps optical transport services on Windstream’s 150,000-mile fiber-optic network. The multi-domain service orchestration will leverage SDN controller software provided by Windstream’s vendor partners. The demonstration also will spotlight intent-based provisioning, data abstraction, and use of APIs for automated software-driven service design, provisioning, and delivery. The demonstration aligns with the MEF Lifecycle Service Orchestration (LSO) standards and open-source projects (see, for example, “MEF increases focus on SD-WAN managed services”).

    Windstream currently provides SDNow to five third-party carrier-neutral data centers, one each in Chicago, Dallas, Ashburn, Miami, and Atlanta. The service provider reveals it has plans to add services and locations this year and next. Windstream explains its use of an SDN-enabled, DevOps-style approach to automation development in SDNow enables the abstraction of service delivery complexity as well as a simplified view of the multi-vendor optical layer.

  17. Tomi Engdahl says:

    China puts its quantum network into use – 2,000 kilometers

    China has brought into use a new quantum network between Beijing and Shanghai. Its length is 2,000 kilometers. The line includes 32 relay stations, and in Jinan the system is able to encrypt 4,000 data blocks per second.

    The system uses single-photon transmitters and quantum gate devices at both the transmitting and receiving ends, so a fiber-based data network can be used for quantum communications.

    At present, only two quantum communication modes are in use, which makes data transfer very slow. The hub of the network is the city of Jinan, where this network technology has been used at city scale for a few years.

    In research settings, Chinese scientists have at best reached spans of 300–400 km, and they have also been active, for example, in developing the MDI-QKD method, which is more tolerant of interference.

    Source: https://www.uusiteknologia.fi/2017/09/29/kiina-otti-kvanttiverkon-kayttoon-pituus-2000-kilometria/

    More:
    China to launch world’s first quantum communication network
    https://phys.org/news/2017-08-china-world-quantum-network.html

  18. Tomi Engdahl says:

    Steve Wozniak: Net neutrality rollback ‘will end the internet as we know it’
    http://www.siliconbeat.com/2017/09/29/steve-wozniak-net-neutrality-rollback-will-end-internet-know/

    Apple co-founder Steve Wozniak penned an op-ed on Friday with a former Federal Communications Commission chairman, urging the current FCC to stop its proposed rollback of Obama-era net neutrality regulations.

    In the op-ed published by USA Today, Wozniak and Michael Copps, who led the FCC from 2001 to 2011, argued the rollback will threaten freedom for internet users and may corrode democracy.

    “The path forward is clear. The FCC must abandon its ill-conceived plan to end net neutrality,” wrote Wozniak and Copps. “Instead of creating fast lanes for the few, it should be moving all of us to the fast lane by encouraging competition in local broadband connectivity and pushing companies to deliver higher speeds at more affordable prices. It’s the right thing for us as consumers and as citizens.”

  19. Tomi Engdahl says:

    Using Ansible to modernize telcos’ infrastructure through automation.
    http://verticalindustriesblog.redhat.com/using-ansible-to-modernize-telcos-infrastructure-through-automation/?sc_cid=7016000000127ECAAY

    As telecommunications companies continue to modernize their networks and IT systems, they have to navigate the challenges that come with legacy systems, including legacy virtualized network functions (VNFs) that rely on local filesystem storage, single server implementations on which all the services run, and manually-intensive installations and upgrades. What telcos need are tools like Ansible, a general-purpose, open-source automation engine that automates software provisioning, configuration management, and application deployment.

    Earlier this month, at AnsibleFest 2017 in San Francisco, Red Hat added new products and updated existing ones that expand its automation portfolio. It added the ability to automate network management and updated Ansible Tower, which enables the automation of IT functions at enterprise scale, so it can now be used to automate the management of Arista, Cisco and Juniper networking software as well as instances of Open vSwitch and VyOS. Red Hat acquired the company behind Ansible in 2015, and today the technology is one of the world’s most popular open source IT automation technologies

  20. Tomi Engdahl says:

    Coaxial Cable vs. Triaxial Cable (with a note about Quadrax & Twinax)
    https://www.picwire.com/technical/tech-papers/coaxial-vs-triaxial-cables

    There are many variations within all the “-axial” designs, but a basic understanding of their family names may help in making sensible application decisions

    Coax (coaxial cable) consists of two conductors which share the same axis. To do this, and to separate them electrically, at least one must be cylindrical and larger in diameter than the other. Coaxial cables are inherently unbalanced, which may be bad news concerning immunity to EMI

    Triax resembles coax in that all the conductors share the same axis, but there are three of them. At least two of these must be cylindrical and insulated from one another and the third conductor. So it is a three-conductor “co-”axial cable.
    Triax can be used in many coax applications, but offers an additional, separate shield

    Twinax also has two twisted conductors, but they are surrounded by a single (or double, but not isolated) shield.

    Quadrax is a four-conductor cable. The two separate shields share the same axis, but the two remaining conductors are a twisted pair.
    Its greatest usefulness is below 50 MHz.

  21. Tomi Engdahl says:

    The steel mill and the accounting office require different things from the network

    In office environments, enterprise and internal network solutions can today be based on purely wireless technology:

    Combining a fast 4G business connection with an APN solution for mobile users brings office workers into the workplace’s internal network with a lightweight implementation.
    A WLAN based on strong authentication has long been sufficient as the internal network solution in an office environment.
    As public cloud services become more common, connecting the office network directly to cloud services such as Microsoft’s and Amazon’s completes the network environment for offices.

    For heavy industry, in contrast, the network technology solution needs to be examined from several additional angles:

    The resistance of physical network devices to heat, dust, moisture, and vibration sets its own requirements for the equipment used.
    Because of latency, mobile-network WAN solutions are not yet feasible at industrial sites today. Instead, an MPLS/SD-WAN hybrid is a good alternative.
    The requirement for uninterrupted operation imposes demands on the coverage and redundancy of the wireless network.
    The whole technology chain from the terminal to the backbone network has to be taken into consideration.
    In addition to deploying the network, managing it will be at least as important, as businesses increasingly rely on network-based processes.

    In industry, the development of new operating models has always been an integral part of everyday business. Continuous development of manufacturing processes and raising the degree of automation have always been commonplace in the manufacturing industry. The development of network technologies is proceeding at the same pace.

    Source: http://www.tivi.fi/Kumppaniblogit/dna/terastehdas-ja-tilitoimisto-vaativat-verkolta-eri-asioita-6678349

  22. Tomi Engdahl says:

    New UL safety certification program addresses ICT power cables
    http://www.cablinginstall.com/articles/2017/09/ul-safety-ict.html?cmpid=enl_cim_cim_data_center_newsletter_2017-10-02

    To benefit end-product and cable manufacturers, brand owners, retailers, and end users, Underwriters Laboratories (UL) has launched a new safety certification program for Information and Communication Technology (ICT) power cables. These cables are used to power or charge IT and communication devices such as laptops, tablet computers, smart phones, power banks and more.

    While the power capabilities of ICT cables are growing to meet the demand for faster charging and to power higher wattage devices, so are the potential risks of overheating and fire due to the use of poorly constructed cables. This program addresses the potential safety hazards of cable assemblies that provide power or charging for connected equipment in a circuit that does not exceed 60 V dc, 8.0 A and 100 W.

    Cables tested and certified to UL 9990 are covered under UL’s surveillance program.

    “The ICT power cable certification program provides increased transparency for vendors and end users by making it simpler to identify those cables that can carry the appropriate current with reduced likelihood of overheating and the risk of fire.”

  23. Tomi Engdahl says:

    Optical Standard Boosts Data to Instruments, SDR, 5G
    https://www.eetimes.com/document.asp?doc_id=1332385&

    The AXIe consortium today announced the Optical Data Interface (ODI), a technical standard for connecting test instruments that have applications in 5G, mil/aero, and software-defined radio (SDR). As communications data rates increase and the amount of data collected by test systems balloons, copper-based communications have a hard time keeping up.

    The ODI standard defines a communications link that uses a common connector and is based on established communications protocols from the Interlaken Protocol standard. Although the AXIe consortium oversees ODI, the communications link isn’t limited to test-and-measurement applications.

    Keysight Technologies, Guzik, Samtec, Conduant, Xilinx, and Intel have announced that they explicitly endorse ODI, with Keysight, Guzik, Samtec, and Conduant all announcing that they will develop ODI products. Samtec will offer 24-fiber ODI cables in standard lengths. Intel will support ODI in its upcoming Stratix 10 FPGA products. Both Intel and Xilinx have long supported the Interlaken Protocol in their FPGAs. Guzik recently announced AXIe digitizer and DSP modules that already have ODI ports. The photo below shows Guzik’s ADP7104 digitizer and two DP7000 signal processors in a Keysight AXIe chassis. Each module has four ODI ports for data (right side) and one ODI port for control (left side)

    Just how much data can pass through an ODI link? The physical layer, defined in the ODI Standard, defines 12 lanes that can run at either 12.5 Gbps or 14.1 Gbps. That results in a data rate of 160 Gbps or 20 Gbytes/s per cable, which can be multiplexed to provide up to 80 Gbytes/s

    Link length can reach up to 100 m. ODI cables use the common multi-fiber push-on connector (MTP), which Samtec and others supply.
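
    The raw lane arithmetic behind those figures is simple (illustrative; the usable-rate number is the article’s, and attributing the difference to Interlaken-style encoding overhead is an assumption on my part):

```python
# Raw lane arithmetic for an ODI link, using the figures quoted above.
lanes = 12
for lane_gbps in (12.5, 14.1):
    print(f"{lanes} x {lane_gbps} Gbps = {lanes * lane_gbps:.1f} Gbps raw")
# 12 x 14.1 ≈ 169 Gbps raw; the quoted 160 Gbps (20 GB/s) usable rate per cable is
# roughly what remains after Interlaken-style 64b/67b encoding overhead (assumption),
# and multiplexing four such links gives the quoted 80 GB/s.
```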

  24. Tomi Engdahl says:

    Nokia has launched new analytics services for operators that make it easier to utilize network data to improve service. The services are based on the machine-learning AVA platform launched last year.

    According to Nokia, the analytics services will not increase network maintenance costs. Still, the added intelligence can improve the first-time resolution rate of emerging network problems by 20–40 percent.

    Likewise, new machine learning techniques make it possible to reduce the number of dropped calls by as much as 35 percent. This is based on algorithms developed by Nokia Bell Labs, which use connection-quality measurements to predict the quality of the user’s subsequent connections. A similar approach is used in network capacity planning.

    Source: http://www.etn.fi/index.php/13-news/6931-nokia-tekoaely-vaehentaeae-puheluiden-katkeamista

  25. Tomi Engdahl says:

    US Telco Fined $3 Million in Domain Renewal Blunder
    https://www.bleepingcomputer.com/news/technology/us-telco-fined-3-million-in-domain-renewal-blunder/

    Sorenson Communications, a Utah-based telecommunications provider, received a whopping $3 million fine from the Federal Communications Commission (FCC) on Friday for failing to renew a crucial domain name used by a part of the local 911 emergency service.

    The affected service was the Video Relay System (VRS), a video calling service that telecommunication firms must provide to deaf people and other people with speech disabilities so they can make video calls to 911 services and use sign language to notify operators of an emergency or crime.

    According to the FCC, on June 6, Sorenson failed to notice that the domain name on which the VRS 911 service ran had expired, leading to the entire system collapsing shortly after.

    Utah residents with disabilities were unable to reach 911 operators for almost three days, the FCC discovered. Sorenson noticed its blunder and renewed the domain three days later, on June 8.

    FCC found the outage was preventable

    “The Commission’s investigation found the outage was preventable,” the FCC wrote in a settlement it reached with Sorenson last week.

    The settlement sum is massive, but of it, only $252,000 is an actual fine, going to the FCC. The rest of the fee, $2.7 million, is restitution Sorenson must give back to the FCC’s TRSF division.

    The FCC uses the TRSF (Telecommunications Relay Services Fund) to subsidize VRS systems across the country. The $2.7 million Sorenson has to give back represents the money the telco received from the US government to run the 911 VRS system and to rent its dedicated bandwidth for the three days the system went down.

    Sorenson is by no stretch of the imagination the first company to forget to renew a domain name.

  26. Tomi Engdahl says:

    North Korea Gets Second Route to Internet Via Russia Link
    https://www.bloomberg.com/news/articles/2017-10-02/north-korea-gets-second-route-to-internet-this-time-via-russia

    North Korea now has two ways to get on the internet, thanks to a new connection from Russia, according to cybersecurity outfit FireEye Inc.

    Russian telecommunications company TransTeleCom opened a new link for users in North Korea

    Until now, state-owned China United Network Communications Ltd. was the country’s sole connection.

    The news comes at a time when China, North Korea’s chief financial backer, faces increasing pressure from the U.S. to force Kim Jong Un to halt his nuclear weapons program.

    FireEye confirmed the availability of the new connection by checking routing tables

  27. Tomi Engdahl says:

    More Than 80 Percent of All Net Neutrality Comments Were Sent By Bots, Researchers Say
    https://science.slashdot.org/story/17/10/03/2146200/more-than-80-percent-of-all-net-neutrality-comments-were-sent-by-bots-researchers-say

    The Trump administration and its embattled FCC commissioner are on a mission to roll back the pro-net neutrality rules approved during the Obama years, despite the fact that most Americans support those safeguards. But there is a large number of entities that do not: telecom companies, their lobbyists, and hordes of bots. Of the more than 22 million comments submitted to the FCC website and through the agency’s API, only 3,863,929 comments were “unique,” according to a new analysis by Gravwell.

    Discovering truth through lies on the internet – FCC comments analyzed
    https://www.gravwell.io/blog/discovering-truth-through-lies-on-the-internet-fcc-comments-analyzed

    For this post, the Gravwell analytics team ingested all 22 million+ comments submitted to the FCC over the net neutrality issue. Using Gravwell we were able to rapidly conduct a variety of analysis against the data to pull out some pretty interesting findings. We scraped the entirety of the FCC comments over the course of a night and ingested them into Gravwell afterward. It took about an hour of poking around to get a handle on what the data was and the following research was conducted over about a 12 hour period. So we went from zero knowledge to interesting insights in half a day. We’re kinda nerding out about it.

    A very small minority of comments are unique — only 17.4% of the 22,152,276 total. The highest occurrence of a single comment was over 1 million.
    Most comments were submitted in bulk and many come in batches with obviously incorrect information — over 1,000,000 comments in July claimed to have a pornhub.com email address
    Bot herders can be observed launching the bots — there are submissions from people living in the state of “{STATE}” that happen minutes before a large number of comment submissions
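
    The core of that duplicate-comment analysis is conceptually simple; a toy sketch (the sample strings below are made up, not actual FCC comments):

```python
from collections import Counter

# Stand-in for the scraped comment texts (made-up examples).
comments = [
    "I support strong net neutrality rules.",
    "Repeal the Title II order.",
    "Repeal the Title II order.",
    "Repeal the Title II order.",
    "Please keep the internet open.",
]

counts = Counter(comments)
print(f"{len(counts) / len(comments):.1%} of comments are unique")
print("most repeated comment:", counts.most_common(1)[0])
```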

  28. Tomi Engdahl says:

    More Than 80% Of All Net Neutrality Comments Were Sent By Bots, Researchers Say
    https://motherboard.vice.com/en_us/article/43a5kg/80-percent-net-neutrality-comments-bots-astroturfing

    95 percent of all organic comments favored net neutrality, according to the analysis.

    The Trump administration and its embattled FCC commissioner are on a mission to roll back the pro-net neutrality rules approved during the Obama years, despite the fact that most Americans support those safeguards. But there is a large number of entities that do not: telecom companies, their lobbyists, and hordes of bots.

    Of the more than 22 million comments submitted to the FCC website and through the agency’s API, only 3,863,929 comments were “unique,” according to a new analysis by Gravwell, a data analytics company. The rest? A bunch of copy-pasted comments, most of them likely by automated astroturfing bots, almost all of them, curiously, against net neutrality.

    “Using our (admittedly) simple classification, over 95 percent of the organic comments are in favor of Title II regulation,” Corey Thuen, the founder of Gravwell, told Motherboard in an email.

  29. Tomi Engdahl says:

    Vogel Telecom builds fiber backbone network across Brazil
    http://www.lightwaveonline.com/articles/2017/10/vogel-telecom-builds-backbone-network-across-brazil.html?cmpid=enl_lightwave_lightwave_datacom_2017-10-03

    Coriant says it has supplied integrated Optical Transport Network (OTN) switching and optical transport systems to Vogel Telecom of Brazil. The carrier’s carrier has used the optical networking hardware to create a nationwide fiber backbone network that will support the delivery of OTN, Ethernet, MPLS-TP, and SDH services. The fiber-optic network construction includes metro to long haul network infrastructure.

    The project, performed in collaboration with strategic partner Logicalis, is an extension of a contract originally signed at the end of 2016, Coriant says.

    Vogel Telecom’s national fiber project includes more than 21,000 km of fiber and connects more than 600 cities in 13 Brazilian states and the Federal District of Brasília.

  30. Tomi Engdahl says:

    Ericsson, Telstra, and Ciena trial continuous data encryption over 21,940 km
    http://www.lightwaveonline.com/articles/2017/10/ericsson-telstra-and-ciena-trial-continuous-data-encryption-over-21-940-km.html?cmpid=enl_lightwave_lightwave_datacom_2017-10-03

    Ericsson (NASDAQ: ERIC) says it has collaborated with optical systems partner Ciena and Australian network operator Telstra to trial continuous data encryption over 21,940 km of submarine cable across a pair of undersea cable networks. The trial used Ciena’s low latency wire-speed encryption technology (see “Ciena WaveLogic Encryption offers optical layer encryption up to 200 Gbps”) to encrypt data while maintaining speed and reliability in transit between Los Angeles and Melbourne at 100 Gbps.

    According to Ericsson, the new encryption technology will appeal to organizations with high security requirements, including finance, healthcare, defense, government sectors, and data center operators.

    “A series of advanced demonstrations such as these are necessary before any product is released commercially,” said Emilio Romeo, head of Ericsson Australia and New Zealand. “In partnership with Telstra and Ciena, Ericsson provides end-to-end systems integration expertise to deliver the secure solution, with our teams continuing to hit faster encryption milestones. In January 2015, we had success at 200 Gbps between Melbourne and Sydney, then with 10 Gbps speeds over the greater distance from Melbourne to Los Angeles in January this year. Now we have achieved 100 Gbps. Ericsson will continue to support Telstra’s path toward commercialization of this enhanced security capability.”

  31. Tomi Engdahl says:

    Nokia leads Broadband Access Abstraction project for Broadband Forum
    http://www.lightwaveonline.com/articles/2017/10/nokia-leads-broadband-access-abstraction-project-for-broadband-forum.html?cmpid=enl_lightwave_lightwave_datacom_2017-10-03

    Nokia says it is leading the Broadband Access Abstraction (BAA) project, an initiative to drive the adoption of software-defined access networks. The effort, part of the Broadband Forum’s (BBF’s) Open Broadband (OB) program, will attempt to reach its goals through the contribution of open source software aligned with industry standards from participants such as Nokia.

    Nokia says the project’s purpose is to define a software reference implementation for an open BAA layer. Development of the BAA layer will reduce dependency on proprietary hardware and software through the provision of standardized interfaces and decoupling implementation from the underlying hardware.

  32. Tomi Engdahl says:

    MXC
    http://www.rosenberger-osi.com/en/main/products-services/fo-components/fo-connectivity-systems/mxcr.html

    The MXC®, a brand of US Conec Ltd., is a lensed multi-fiber connector used, among other things, for 400GBASE-SR16 applications. The connector supports 32 fibers (2 rows of 16 fibers each). In total, up to 64 fibers are possible in one connector (4 rows of 16 fibers each). The so-called Card-Edge and Mid-/Backplane versions of the MXC® are used in high-performance IT products.

    Rosenberger OSI has been trained as one of the first European manufacturing partners for the MXC® connector and has successfully completed the demanding certification process.

  33. Tomi Engdahl says:

    BT unveils SD-WAN service for improved infrastructure control
    http://www.lightwaveonline.com/articles/2017/10/bt-unveils-sd-wan-service-for-improved-infrastructure-control.html?cmpid=enl_lightwave_lightwave_datacom_2017-10-03

    BT has unveiled BT Agile Connect, a software-defined wide area network (SD-WAN) service that BT says will provide customers with improved control and understanding of their infrastructure and traffic flows. Created for large organizations, Agile Connect will offer customers a fast, secure means of new site set-up, along with low costs and decreased network complexity, according to BT. The new SD-WAN service leverages the security and resiliency of BT’s global network infrastructure, as well as BT technologies and those of Nokia’s Nuage Networks.

    To determine the most effective traffic route across a customer’s wide area network, Agile Connect uses software-defined networking (SDN) on national and global scales. This routing can make introducing new access services to customer networks easier or make using what were previously backup connections more effective, BT asserts. The routing also ensures the preferred route for traffic from high-priority business applications.

    According to BT, Agile Connect is designed with the following features:

    · Changes are implemented centrally, making local technical support unnecessary and enabling customers to prioritize applications or manage access services use via an interactive portal. Customers can also obtain improved visibility of application performance.

    · A BT pre-built controller infrastructure is included and hosted on the internet, and on BT’s multi-protocol label switching (MPLS) network.

    · BT pre-built MPLS internet gateways are used by the service to offer cloud-based connectivity between internet-connected and MPLS-connected sites.

    · BT’s investments in both controller and gateway infrastructure security are leveraged by the service.

    BT offers SDN-enabled WAN service
    http://www.lightwaveonline.com/articles/2016/01/bt-offers-sdn-enabled-wan-service.html

  34. Tomi Engdahl says:

    Power/Performance Bits: Oct. 3
    Slowing down photonics; longer-lasting batteries; energy-harvesting roads.
    https://semiengineering.com/powerperformance-bits-oct-3/

    Slowing down photonics
    Researchers at the University of Sydney developed a chip capable of converting optical data into sound waves, slowing the data transfer enough to process the information.

    While speed is a major bonus with photonic systems, it’s not as advantageous when processing data. By turning optical signals into acoustic ones, data can be briefly stored and managed inside the chip for processing, retrieval and further transmission as light waves.

    “The information in our chip in acoustic form travels at a velocity five orders of magnitude slower than in the optical domain,” said Birgit Stiller, research fellow at the University of Sydney. “Our system is not limited to a narrow bandwidth. So unlike previous systems this allows us to store and retrieve information at multiple wavelengths simultaneously, vastly increasing the efficiency of the device.”

    World-first microchip: ‘storing lightning inside thunder’
    http://sydney.edu.au/news-opinion/news/2017/09/19/world-first-microchip—storing-lightning-inside-thunder-.html

    Reply
  35. Tomi Engdahl says:

    AOL and Yahoo plan to call themselves by a new name after the Verizon deal closes: ‘Oath’
    http://nordic.businessinsider.com/aol-and-yahoo-will-become-oath-after-merger-closes-2017-4

    When Verizon merges Yahoo with AOL after its acquisition of Yahoo closes, the newly created division will get a new name.

    And that new name is “Oath,” sources tell Business Insider.

    In a deal that was first announced last July, Verizon will acquire Yahoo’s core internet business for about $4.83 billion in cash.

    Yahoo will then be merged with Verizon’s AOL unit under Marni Walden – the executive vice president and president of product innovation and new businesses – with Verizon scooping up Yahoo’s search, mail, content, and ad-tech businesses.

    In January, Yahoo announced in an SEC filing that following the close of the merger, the parts of Yahoo that Verizon is not buying (which include Yahoo’s 15% stake in Chinese retail giant Alibaba and a part of Yahoo Japan, a joint venture with Softbank) will continue on under the name Altaba.

    Reply
  36. Tomi Engdahl says:

    SCTE Cable-Tec Expo 2017 Must See Products: Coriant
    http://www.broadbandtechreport.com/articles/2017/10/scte-cable-tec-expo-2017-must-see-products-coriant.html?cmpid=enl_btr__2017-08-30

    Coriant will highlight the modular Groove G30 Network Disaggregation Platform, which helps operators meet the growth in cloud traffic with industry-leading low power consumption and high density. The Groove G30 supports up to 3.2 Tbps of capacity in 1RU.

    Reply
  37. Tomi Engdahl says:

    Prepping for the Fiber Future
    http://www.broadbandtechreport.com/articles/2017/09/prepping-for-the-fiber-future.html?cmpid=enl_btr_weekly_2017-10-03

    Market analysts predict a heightened investment in fiber infrastructure to meet the need for greater internet speed and capacity. These investments, experts say, will dramatically stimulate economic productivity in the United States.

    Cable is at the forefront of this investment, as over the years cable operators have pushed fiber deeper into their networks to better support broadband connectivity to subscribers. Techniques such as node splitting and node segmentation, as well as the capabilities provided by the coax medium, have enabled the industry to cover an estimated 85% of U.S. homes with internet speeds of 25 Mbps. The broad-scale penetration of their distribution fiber network has provided cable operators with an opportunity to expand their portfolios into adjacent services such as the backhaul of macro cell sites for wireless operators and enterprise services.

    As we have seen with the Cable Wi-Fi Alliance, some cable operators are providing internet connectivity through Wi-Fi hotspots deployed throughout their own and other participating partners’ networks. Today, we see the same cable operators expanding their service offerings through agreements with major wireless providers, acting as virtual mobile network operators. These agreements have provided cable operators the opportunity to expand service offerings into the wireless space with a cellular and Wi-Fi experience outside the home.

    Here are some examples of fiber-deep expansion challenges the cable industry faces with the next “upgrade,” and how to address them:

    As cable providers push fiber deeper into the access network with new architectures, the number of fiber termination points in the network may expand 10-12X. In some cases, node sizes can be reduced to as few as 64 passings. A “short payback period” is critical to the financial and competitive success of an expanded fiber-deep network. Operators will need to evaluate new deployment techniques to speed deployment.
    As providers push fiber closer to subscribers, they will expand further into the civil infrastructure, requiring municipal permitting that could impede the pace of a project. For instance, a jurisdiction may object to the size and placement of a street-side vault. One remedy is to make smart use of existing vaults or pathways: Consider smaller closure technology that eliminates splicing beyond the vault itself.
    Expansion of fiber deployments in the network will be most effective if, while designing, you consider adjacent services connected to the network. A multi-service network design is critical for operators to deploy as network upgrades move forward. Competitive pressures often require a rapid deployment of FTTH-based services to single-family units, multiple dwelling units or a master planned community. Wireless densification and the related small cell backhaul will rely on comprehensive and dense fiber networks that differ in some cases from current node+0 architecture plans.

    Smart infrastructure design and deployment will enable the cable network of the future while also considering cost constrictions.

    Reply
  38. Tomi Engdahl says:

    NIST Readies to Tackle Internet’s Global BGP Vulnerabilities
    http://www.securityweek.com/nist-readies-tackle-internets-global-bgp-vulnerabilities

    NIST has published an update on its work on the new Secure Internet Domain Routing (SIDR) standards designed to provide the internet the security that is currently lacking from the Border Gateway Protocol (BGP).

    BGP was designed in 1989 as a short-term fix for the earlier Exterior Gateway Protocol that could no longer handle the rapidly increasing size of the internet, and was in imminent danger of meltdown. The problem is that BGP was designed without any security, despite it being fundamental to the operation of the internet.

    BGP controls the route that data takes from source to destination. It does this by keeping tabs on the availability of local stepping stones along that route. The availability of those stepping stones is maintained in regularly updated routing tables held locally. The problem is that there is no security applied to those tables — in effect, the entire map of the internet is built on trust; and trust is in short supply in today’s internet. Whole swathes of traffic can be hijacked.

    “BGP forms the technical glue holding the internet together,” explains NIST in Tuesday’s post; “but historically, its lack of security mechanisms makes it an easy target for hacking.”

    The trust model underpinning BGP is easily abused, and has frequently been abused. Generally speaking, most abuse is thought to have been accidental, but there have been enough suspicious incidents to demonstrate that the theoretical concern over BGP’s security is not unfounded.

    “As a result,” warns NIST in a separate publication (SIDR, Part 1: Route Hijacks– PDF)

    New Network Security Standards Will Protect Internet’s Routing
    https://www.nist.gov/news-events/news/2017/10/new-network-security-standards-will-protect-internets-routing
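
    A core piece of the SIDR work is RPKI-based route origin validation: before accepting an announcement, a router checks whether the announced prefix and its claimed origin AS match a cryptographically signed Route Origin Authorization (ROA). As a rough illustration of that check only (not NIST’s or any vendor’s implementation, and with a made-up ROA table), a minimal Python sketch could look like this:

    # Minimal sketch of RPKI-style route origin validation (illustrative only;
    # real validators work from signed ROAs fetched from RPKI repositories).
    import ipaddress

    # Hypothetical ROA table: prefix -> (authorized origin AS, max prefix length)
    ROAS = {
        ipaddress.ip_network("192.0.2.0/24"): (64500, 24),
        ipaddress.ip_network("198.51.100.0/22"): (64501, 24),
    }

    def validate_announcement(prefix: str, origin_as: int) -> str:
        """Return 'valid', 'invalid', or 'not-found' for a BGP announcement."""
        net = ipaddress.ip_network(prefix)
        covering = [(asn, maxlen) for roa_net, (asn, maxlen) in ROAS.items()
                    if net.subnet_of(roa_net)]
        if not covering:
            return "not-found"      # no ROA covers this prefix
        for asn, maxlen in covering:
            if asn == origin_as and net.prefixlen <= maxlen:
                return "valid"      # origin AS and prefix length both authorized
        return "invalid"            # covered by a ROA, but origin or length is wrong

    print(validate_announcement("192.0.2.0/24", 64500))    # valid
    print(validate_announcement("192.0.2.0/24", 64666))    # invalid (hijack-like)
    print(validate_announcement("203.0.113.0/24", 64500))  # not-found

    Routers then apply these valid/invalid/not-found results to their BGP route selection policy, which is exactly the gap in the trust model described above.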

    Reply
  39. Tomi Engdahl says:

    EDITORIAL GUIDE
    SDN/NFV and Optical Networks
    http://www.lightwaveonline.com/content/lw/en/editorial-guides/2017/09/sdn-nfv-and-optical-networks.whitepaperpdf.render.pdf

    Building the Foundation for Cognitive Networking
    This article describes the journey to cognitive networking by explaining its key
    building blocks, such as software defined capacity (SDC), multi-layer software-defined
    networking (SDN) control, and transport technology breakthroughs that make the
    creation and the deployment of cognitive networks a fast-approaching reality

    What Is Cognitive Networking?
    Cognitive networking is the ultimate goal for the intelligent transport layer that
    underpins all cloud-based digital communications. By definition, a cognitive
    network is multi-layer, self-aware, self-organizing, and self-optimizing, and can
    take predictive and/or prescriptive action based on what it has gleaned from its
    collected data and experience. Realistically, no network can completely plan or
    run itself; however, cognitive networking will dramatically reduce the number of
    manual tasks required across a multi-layer network. This goal can be achieved
    by leveraging advanced software, streaming telemetry, big data with machine
    learning, and analytics to autonomously conduct network operations to meet the
    demand for connectivity, maximize network resources, and increase reliability.

    There are multiple important elements in a cognitive network, such as:

    · Advanced analytics: designed to parse streams of machine data to monitor network health and raise awareness of any anomalies

    · Machine learning: software tools that leverage advanced analytics to understand and identify trends in operations

    · Autonomous: hardware and software capable of executing various tasks and conducting required maintenance

    · Predictive intelligence: tools capable of identifying potential problems before they happen

    · Prescriptive: software tools designed to proactively recommend new solutions for maximizing capacity, enhancing reliability, and optimizing assets

    Cognitive networking is the result of seamless and highly dynamic interaction
    between software and hardware assets across network layers and brings optical
    networking to a new level of scalability, flexibility, and automation.

    Evolve the network architecture.
    A well-defined architecture dictates how
    networks are planned, operated, and evolved. Today, when content is king and
    must be accessible anywhere, anytime, and on any device with the highest level
    of quality, it is clear that the 1980s-era seven-layer Open Systems Interconnection
    (OSI) model has reached a tipping point. It needs to support the transformation
    in networks (e.g., network functions virtualization [NFV], SDN, etc.) and the new
    service delivery model based on cloud applications, service virtualization, etc.

    This new model consolidates and simplifies cloud service delivery and networking
    into two layers, wherein all the OSI networking layers (Layer 3 and below) are
    represented by Layer T, while all the application layers (Layer 4 and above) are
    grouped under Layer C. Layer T sets the guidelines and principles for the transport
    of data streams, whether between end users and data centers or among multiple
    data centers with bursty and often unpredictable traffic patterns.

    Layer C contains all the applications, functions, and services that run in the
    cloud, including consumer and business applications, SDN-based service creation
    and orchestration tools, software frameworks and applications for big data and
    machine learning, virtualized network functions (VNFs), and many others

    Leverage software defined capacity (SDC).
    A key steppingstone toward cognitive
    networking is to break away from the current methods of optical capacity planning,
    engineering, and hardware-based deployment that require numerous truck rolls,
    extensive manual labor, and human interaction at multiple points in the network.
    The road to cognitive networking starts with allowing intelligent software tools to
    dynamically add, modify, move, and retire optical capacity based on the real-time
    requirements of upper-layer applications (Layer C)

    Cloud Automation Enhances Optical Data Center Interconnect

    The phenomenal growth of hyperscale internet content providers
    (ICPs) continues to drive innovation and change in network
    architectures, systems, and components.

    ICP automation can be characterized by three principles:
    1. Make everything open and programmable.
    2. Automate every task.
    3. Collect every data point and apply big data analytics.
    Because ICPs have large software teams that can develop custom, higher layer
    automation applications, their requirements for network equipment suppliers
    focus on maximizing network programmability and visibility.

    Increasing automation is also a goal for traditional communication service
    providers (CSPs), and capabilities driven by ICPs also apply to CSP networks.

    However, most CSPs do not have the software development capabilities typical
    of large ICPs, so many CSPs need their suppliers to provide automation software
    aligned to their network architecture and business model

    Everything Open and Programmable
    ICPs have long preferred to control and manage devices using software tools and
    systems developed in-house and tailored to their needs. Initially, ICPs developed
    automation techniques for their networks using available tools such as simple
    network management protocol (SNMP), Syslog and Terminal Access Controller
    Access-Control System Plus (TACACS+) protocols and, where necessary, command
    line interface (CLI) scripting.

    To simplify automation and improve reliability and scalability, ICPs led the push
    for modern, open application programming interfaces (APIs) on networking
    equipment. A primary example is the combination of NETCONF and YANG.
    NETCONF enables communication of network configuration and operations
    data and YANG provides a framework for description of that data.

    YANG models are currently being standardized through
    various industry efforts, such as the OpenConfig working group.
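
    To make the NETCONF/YANG point concrete, here is a small sketch using the open-source ncclient Python library to pull the running configuration from a NETCONF-enabled device. The address and credentials are placeholders, and the device is assumed to expose NETCONF over SSH on port 830:

    # Sketch: retrieving the running configuration over NETCONF with ncclient
    # (device address and credentials below are purely hypothetical).
    from ncclient import manager

    with manager.connect(
        host="192.0.2.10",       # placeholder management address
        port=830,
        username="admin",
        password="admin",
        hostkey_verify=False,    # acceptable for a lab sketch, not for production
    ) as m:
        # Capabilities advertise which YANG models (e.g. OpenConfig) the device supports
        for cap in m.server_capabilities:
            if "openconfig" in cap:
                print(cap)
        # The returned configuration is XML structured by those YANG models
        running = m.get_config(source="running")
        print(running.data_xml[:500])

    The same YANG models describe the data regardless of vendor, which is what makes this kind of scripted automation portable across equipment.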

    Topology Discovery.
    Once optical DCI devices are connected and configured,
    network operators need to be able to validate that they have been connected
    correctly and isolate faults quickly and accurately. Both tasks require an accurate
    view of the network topology, i.e., how every device is connected to other devices

    Automating Encryption.
    Leading compact DCI systems also have begun to
    incorporate built-in hardware support for line-rate encryption, enabling all
    data to be encrypted as it travels between data centers.

    Collect Everything, Apply Big Data Analytics
    Automation is equally critical to ongoing network management. In today’s
    networks, performance and fault management and troubleshooting typically rely
    on highly skilled network engineers working with limited data, which requires
    a substantial amount of hands-on activity. Good network management systems
    (NMSs) make the process simpler and reduce the cost and time required for fault
    isolation and recovery. But ICPs nonetheless see opportunities to improve on the
    current state of the art by applying their automation skills.

    Collect Everything.
    With the falling cost of compute power and storage, there are
    few remaining barriers to collecting every bit of potentially relevant data about
    networks.
    Looking forward, ICPs will expect all network elements, including
    optical DCI systems, to support continuous transmission of data to cloud-based
    data collection agents, a process known as streaming telemetry.
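
    Production streaming telemetry typically runs over gRPC-based interfaces such as gNMI, but the push model itself is easy to sketch. The toy Python example below (purely illustrative: made-up counters and a plain TCP/JSON transport rather than any real telemetry protocol) shows a “device” streaming interface counters to a collector without ever being polled:

    # Toy push-based telemetry sketch: a device thread streams counters to a
    # collector thread over TCP as newline-delimited JSON (illustration only).
    import json, random, socket, threading, time

    HOST, PORT = "127.0.0.1", 9009   # hypothetical collector address

    def collector():
        srv = socket.socket()
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        buf = b""
        while True:
            data = conn.recv(4096)
            if not data:
                break
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                print("collector got:", json.loads(line))  # here: store for analytics

    def device():
        time.sleep(0.2)                      # give the collector time to listen
        sock = socket.create_connection((HOST, PORT))
        octets = 0
        for _ in range(5):                   # samples are pushed, not polled
            octets += random.randint(1_000, 10_000)
            sample = {"interface": "eth0", "in_octets": octets, "ts": time.time()}
            sock.sendall((json.dumps(sample) + "\n").encode())
            time.sleep(1)
        sock.close()

    threading.Thread(target=collector, daemon=True).start()
    device()

    The point of the model is that the device, not the manager, decides when data is sent, which is what makes continuous, high-frequency collection practical compared with SNMP polling.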

    Apply Big Data Analytics.
    ICPs are working to automate the initial steps in
    trouble isolation and repair using smart software, data analytics, and machine
    learning.

    Once streaming telemetry is widely deployed and massive amounts of historical
    data are available, big data analytics techniques can be applied to identify trends
    or opportunities for optimization.

    Multi-platform Test Instruments for Successful SDN Migration

    Operators are drawn to software-defined networks (SDNs)
    because they simplify network management. As operators migrate
    from traditional environments to SDN, they must be aware of
    potential service disruptions, lower client quality of service (QoS),
    and other issues. Testing, while always important, takes on an even greater
    role during this transitional stage, particularly at the data-plane level where
    operator revenue is generated.

    While the tests – QoS, traffic generation, etc. – haven’t changed, the technologies
    certainly have, and operators will be required to test a variety of client signals
    over the data plane. Circuit-based traffic common in legacy networks is now
    being encapsulated into next-generation transport technologies such as Optical
    Transport Network (OTN), Multiprotocol Label Switching (MPLS), and Generalized
    MPLS (GMPLS)

    OTN has increasingly become the preferred method for high-speed networks for
    a number of reasons.

    OTN uses the Optical Data Unit (ODU) as a digital wrapper for many client technologies, such
    as Ethernet, SONET, and Fibre Channel.
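
    As a quick illustration of the digital-wrapper idea, a mapper or test tool conceptually just picks the smallest ODU container that fits the client signal. The rates below are approximate nominal ODUk rates and are not taken from the article:

    # Illustrative only: choose the smallest OTN container for a client signal.
    ODU_RATES_GBPS = {        # approximate nominal ODUk rates
        "ODU0": 1.24,
        "ODU1": 2.50,
        "ODU2": 10.04,
        "ODU3": 40.32,
        "ODU4": 104.79,
    }

    def smallest_odu(client_gbps: float) -> str:
        """Return the smallest ODUk able to carry the given client rate."""
        for name, rate in sorted(ODU_RATES_GBPS.items(), key=lambda kv: kv[1]):
            if rate >= client_gbps:
                return name
        raise ValueError("client rate exceeds ODU4 in this sketch")

    for client, rate in [("GbE", 1.0), ("OC-48/STM-16", 2.49),
                         ("10GbE", 10.0), ("100GbE", 100.0)]:
        print(f"{client:>14} ({rate} Gbps) -> {smallest_odu(rate)}")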

    Appeal of SDN

    As described by the Open Networking Foundation (ONF), the SDN architecture
    is dynamic, manageable, cost-effective, and adaptable

    Proponents of SDN point to five key benefits:

    1. Programmability:
    Control of the network can be programmed directly
    because it is decoupled from forwarding functions.

    2. Responsive:
    Administrators can dynamically adjust network-wide traffic flow
    to meet changing needs.

    3. Centrally managed:
    Software-based SDN controllers centralize network
    intelligence to create a universal network appearance to applications and
    policy engines.

    4. Dynamic Configuration:
    Network managers can configure, manage, secure,
    and optimize network resources very quickly via dynamic, automated SDN
    programs. Because the programs are based on open standards, network
    managers can write them themselves.

    5. Vendor-agnostic:
    When implemented through open standards, network
    design and operation are simplified because instructions are provided by SDN
    controllers rather than multiple, vendor-specific devices and protocols.

    Test software integrated into the SDN is not currently optimal because it
    provides no independent reference when tests are conducted.

    Instrument flexibility is also critical, enabling operators to save time and money.
    Test sets need to support legacy and emerging transport technologies, as well as
    rates from DS1 to 100 Gbps, so they can conduct measurements anywhere in the
    network, including inside the metro, access, and core.

    Virtualized tools for monitoring network performance, reliability, and dynamic
    traffic management that communicate through the control plane are significant
    reasons to migrate to an SDN architecture. However, independent testing and
    troubleshooting against a known reference is still required in the data plane.

    Not just any test approach can be used to monitor networks during the
    migration to SDN. These environments require test instruments
    that support forward error correction (FEC) performance tests using Poisson-
    distributed random errors. Specified in ITU-T O.182, these tests are important
    because the FEC section is one of the most vital areas of the network frame.

    Reproducible, accurate FEC error correction tests are performed by generating
    truly random signal errors that can stress OTN FEC.
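
    To show what Poisson-distributed error injection means in practice (a sketch only, not a test-set implementation), the snippet below flips a Poisson-distributed number of randomly placed bits in a frame-sized block, which is the kind of statistically random stress an O.182-style FEC test applies:

    # Sketch: inject a Poisson-distributed number of random bit errors into a block.
    import numpy as np

    rng = np.random.default_rng()

    def inject_poisson_errors(block: bytes, ber: float) -> bytes:
        """Flip a Poisson-distributed number of randomly placed bits (collisions ignored)."""
        n_bits = len(block) * 8
        n_errors = rng.poisson(ber * n_bits)       # expected error count = BER * bits
        data = bytearray(block)
        for pos in rng.integers(0, n_bits, size=n_errors):
            data[pos // 8] ^= 1 << (pos % 8)       # flip one bit
        return bytes(data)

    frame = bytes(16320)                           # roughly one OTU frame of zeros
    errored = inject_poisson_errors(frame, ber=1e-3)
    flipped = sum(bin(a ^ b).count("1") for a, b in zip(frame, errored))
    print("bits flipped:", flipped)                # about 130 on average at this BER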

    Reply
  40. Tomi Engdahl says:

    Data Center Interconnect Strategies
    http://www.lightwaveonline.com/content/lw/en/editorial-guides/2017/08/data-center-interconnect-strategies.whitepaperpdf.render.pdf

    The articles in
    this Editorial Guide describe some of
    the challenges faced in data center
    interconnect and how network
    managers can overcome them.

    Security Strategies for Data Center Interconnect

    Data security and privacy concerns continue to intensify along
    with the risks and costs of data breaches, leading many network
    operators to seek new security strategies and tools. One tool
    attracting particular interest among cloud service providers and
    data center operators is in-flight data encryption for data center interconnect
    (DCI) links. Advances in encryption technologies, systems, and solutions are now
    making it possible for all data traversing DCI links to be encrypted simply and
    cost-effectively, and with virtually no impact on DCI scalability and performance.

    Even fiber-optic networks, sometimes assumed (wrongly) to be inherently secure,
    are vulnerable to fiber tapping, which enables data to be captured and copied
    without alteration as it is transmitted over a fiber.

    Demand for in-flight data encryption was initially strongest among selected
    enterprise verticals, including financial services and government, but it is now
    growing across a broad range of customers in every market. Of particular note:
    Hyperscale cloud service providers are increasingly enabling encryption across
    their massive DCI networks to meet customer expectations.

    Of course, in-flight data encryption is no more of a security panacea than any
    other tool. It must be viewed as part of an overall security strategy that includes
    encryption of data in use and data at rest.

    In-flight Encryption Options
    In-flight encryption options are available at multiple layers of the protocol stack,
    but not all these approaches are created equal.

    Internet Protocol security (IPsec) is widely used and accepted
    for internet-based virtual private networks (VPNs), but it suffers from high
    overhead and other disadvantages when applied to DCI. The better choices for DCI
    are encryption at lower layers, either Layer 1 encryption or media access control
    security (MACsec) at Layer 2.

    MACsec is standards-based and widely accepted, like IPsec, but has inherently
    lower overhead. Hardware-based implementations of MACsec, delivering line-
    rate encryption with very low latency, are available from multiple component
    vendors. Because MACsec is well-established and aligned to Ethernet-based DCI
    requirements, it is the preferred choice for some hyperscale cloud providers.

    Layer 1 (L1) encryption is an emerging choice that matches the low-latency and
    high-performance characteristics of MACsec and provides incremental benefits.
    Encryption protocol overhead measured at the Ethernet layer is eliminated
    completely with L1 encryption because it is part of the lower layer frame.

    Considering all the options, MACsec and L1 encryption are both good choices for
    DCI, capable of providing similar benefits and performance.
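
    A rough way to see the overhead difference: MACsec adds a per-frame security tag and integrity check value (commonly quoted as 24 to 32 bytes in total), while Layer 1 encryption carries its overhead below the Ethernet layer. Assuming the 32-byte worst case, a back-of-the-envelope calculation looks like this:

    # Rough MACsec per-frame overhead estimate (assumed 16-byte SecTAG + 16-byte ICV).
    PREAMBLE_IFG = 20   # preamble + inter-frame gap on the wire, bytes

    def macsec_overhead_pct(frame_bytes, sectag=16, icv=16):
        """Extra bytes on the wire as a percentage of the unencrypted frame."""
        on_wire = frame_bytes + PREAMBLE_IFG
        return (sectag + icv) / on_wire * 100

    for frame in (64, 512, 1518):   # frame sizes including headers and FCS
        print(f"{frame:>5} B frame: ~{macsec_overhead_pct(frame):.1f}% added by MACsec")

    The penalty is small for large frames and larger for small ones; eliminating it entirely is part of the incremental benefit attributed to L1 encryption above.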

    A complete approach for in-flight encryption includes several components, each
    of which should conform to strong standards and best practices to ensure a high
    level of security:

    Before two DCI endpoint devices can exchange keys and begin encrypting data,
    each device must be authenticated. Best practices used in other IT applications,
    including X.509 certificates and public key infrastructure (PKI), can be applied
    equally well for DCI devices.

    The data encryption should conform to the strongest standard, AES-256-GCM
    (Advanced Encryption Standard cipher with 256-bit keys and Galois/Counter
    Mode of operation), and the data encryption keys should be different from the
    keys used for device authentication

    Authenticated DCI devices must create a shared key for secure data exchange
    using a key exchange protocol. The best practice for efficient exchange of AES-
    256 keys uses the elliptic curve Diffie-Hellman (ECDH) protocol with a 521-bit
    elliptical curve key size.
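
    The key-establishment and encryption pieces named above can be sketched with the Python cryptography package. This is a minimal sketch only; device authentication, certificate handling, and the hardware line-rate datapath are omitted:

    # Sketch: ECDH over the 521-bit NIST curve plus AES-256-GCM payload encryption,
    # using the "cryptography" package (X.509-based authentication omitted).
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Each DCI endpoint generates a key pair on the P-521 curve
    key_a = ec.generate_private_key(ec.SECP521R1())
    key_b = ec.generate_private_key(ec.SECP521R1())

    # Both sides derive the same shared secret from the peer's public key
    shared = key_a.exchange(ec.ECDH(), key_b.public_key())
    assert shared == key_b.exchange(ec.ECDH(), key_a.public_key())

    # Derive a 256-bit AES data key from the shared secret (kept separate from
    # any keys used for device authentication)
    aes_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"dci-data-key").derive(shared)

    # Encrypt a frame with AES-256-GCM; a fresh nonce is required for every frame
    aesgcm = AESGCM(aes_key)
    nonce = os.urandom(12)
    ciphertext = aesgcm.encrypt(nonce, b"payload between data centers", None)
    print(aesgcm.decrypt(nonce, ciphertext, None))

    In a real DCI system this runs in hardware at line rate, and the derived data keys are rotated frequently, as the next paragraph notes.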

    To further strengthen security, data encryption keys should be changed
    regularly, ideally as frequently as minutes or even seconds, without any
    interruption to the data flow.

    Access to the DCI devices themselves should be secured to ensure that the
    encryption hardware and keys cannot be compromised by unauthorized users.

    Last but not least, while encryption can seem inherently complicated, a good
    in-flight security scheme should be designed to be as simple as possible for
    initial implementation and ongoing management.

    Optical DCI Architecture: Point-to-Point versus ROADM

    Virtually all data centers use DWDM networks to meet their DCI needs. The
    ability to carry vast amounts of traffic at an economical price makes DWDM
    technology the obvious choice. Two competing optical DWDM architectures exist
    for providing DCI bandwidth:

    Point-to-point (P2P)
    ROADM.

    With a P2P design, each site has a single DWDM multiplexer and a fiber pair for
    each site to which it is connected. With a ROADM design, there is a single two-
    degree ROADM at each site and a single fiber ring to which each site is connected.
    The ROADM is capable of dropping traffic into that data center or passing traffic
    directly through to the next node on the ring.

    The DWDM multiplexers used in the P2P architecture are passive,
    whereas the ROADM equipment is active; thus the P2P uses less power and the
    P2P scenarios have lower utility costs. As the number of sites increases, the
    number of rack unit devices used at each site grows in the case of the P2P design.
    However, the number of ROADMs at each site stays the same.

    ROADM architecture holds an escalating cost advantage over P2P as the number
    of sites increases beyond three locations. This cost differential is primarily
    attributed to dark fiber costs and becomes more significant as architectures
    become more “meshy.” For example, in a six-site network design, the cost of a P2P
    architecture is approximately twice that of ROADM.

    One argument against ROADM architecture is that there are fewer wavelengths,
    and thus less bandwidth between each site. ROADM sites will have 88, 96, or
    possibly 128 wavelengths of 100G/200G available, depending on the technology
    used. Thus, a four-node scenario will have an average of about 42 (128/3)
    wavelengths between each node. That’s a lot of bandwidth and, in most cases,
    will satisfy wavelength and bandwidth needs for years to come.

    Additionally, performing add-and-change operations in a P2P design requires
    fibers to be added or moved manually.
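
    To put rough numbers on the scaling argument, here is a small sketch under illustrative assumptions only (full-mesh P2P connectivity, one passive mux per remote site, a single ROADM ring sharing 128 wavelengths):

    # Rough scaling comparison of the two DCI architectures described above.
    def p2p(sites):
        fiber_pairs = sites * (sites - 1) // 2   # one fiber pair per site pair (full mesh)
        muxes_per_site = sites - 1               # one passive mux per remote site
        return fiber_pairs, muxes_per_site

    def roadm(sites, ring_wavelengths=128):
        fiber_pairs = sites                      # ring segments between adjacent sites
        avg_waves_per_pair = ring_wavelengths / (sites - 1)
        return fiber_pairs, round(avg_waves_per_pair, 1)

    for n in (3, 4, 6):
        print(f"{n} sites: P2P (fiber pairs, muxes per site) = {p2p(n)}, "
              f"ROADM (fiber pairs, avg wavelengths per site pair) = {roadm(n)}")

    The fiber-pair count is what drives the dark fiber cost difference mentioned above, and the 128/3 figure for a four-node ring reproduces the roughly 42 wavelengths per node pair quoted earlier.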

    Validating Client Cable Connections throughout the Data Center

    Different Fiber-Optic Cables for Different Uses
    DCO teams are responsible for the installation and verification of client cabling
    and transported services, from cloud applications to streaming content. Electrical
    Ethernet cabling such as active and passive DACs, Cat5e, Cat6, and Cat6a can be
    found in most plants for delivery up to 25 Gigabit Ethernet (GbE) rates for patch
    panels and horizontal or vertical cable managers.
    Fiber-optic cables are common for most rates greater than 10GbE and can vary
    in types, connectors, and lengths depending on the application. MPO trunk
    cables can support up to 72 fiber strands, housed in one MPO connector, but are
    relatively expensive and used for short distances. Single mode fibers are used for
    longer distances (10-40 km) and higher data rates (up to 100 Gbps), but are more
    costly due to the optics required for data transmission. Multimode duplex fiber
    provides high data rates (up to 10 Gbps per fiber) over short to medium distances
    (up to 300 m).

    For these reasons, data centers incorporate a combination of electrical and fiber-
    optic data connections, over both short and long distances, and use a variety of
    connectors depending on the type of network element, performance, and speed
    required to support the client service.

    Fiber Optic Client Cable Testing
    There are three main areas of fiber-optic client cable testing:
    1. Fiber-optic connector inspection
    2. Optical transceiver analysis
    3. Fiber-optic cable measurement

    Connector inspection: How clean is the fiber connector?
    Dirty or damaged fiber-optic connectors can greatly affect network performance.
    A digital video inspection probe can reduce these issues by verifying the
    condition and cleanliness of connector end faces during the installation phase
    without the need for analog microscopes.
    Fail status should be displayed as defined by IEC 61300-3-3

    Optic transceiver analysis: Are my pluggable optical modules functioning
    correctly?
    As data rates increase to 25, 40, and 100 Gbps, pluggable optics and
    DACs become more complex and potential additional points of failure. Pluggable
    optical modules (in SFP, SFP+, SFP28, CFP, CFP2, CFP4, QSFP+, QSFP28, CXP, or
    other form factors) support management data input/output (MDIO) and I2C
    access to the optical transceiver. I2C and MDIO analysis and testing verify
    the performance of the pluggable optical module and monitor alarms and errors
    to isolate issues from the DAC and fiber-optic cables and connectors.

    Fiber-optic cable measurement: What is my total loss? How much loss at each
    termination?
    Scratched or dirty connectors at fiber-cable connections, such as
    patch panels and adapters, can be detected as fault locations from the excessive
    optical reflections they create. An optical time-domain reflectometer (OTDR)
    displays such measurement results as a trace that shows the cable length, macro-
    bends, losses, and size of reflections, as well as an easy-to-view summary of
    the analysis results.

    Here are the most common benchmark tests:

    RFC 2544
    – This IETF standards body test typically is used for packet-based
    network equipment or new lines. This traditional benchmark test is used to test
    the Layer 2 “pipe” throughput, latency, and frame loss after the physical Layer 1
    fiber or electrical cable has been tested.

    Y.1564
    – This ITU standards body test is relatively new to the industry; it often
    is used to replace RFC 2544 in the operator end-to-end testing scenario, but not
    for individual switch/router equipment.

    RFC 6349
    – This IETF standards body test was developed more recently. This
    benchmark test addresses the Layer 4 TCP communication within the packet-
    based transport. This test is focused on the client-to-client communication
    once the Physical, Transport, and Network layers are tested.
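
    The Layer 4 focus of RFC 6349 comes down to the bandwidth-delay product: a single TCP connection cannot exceed its window size divided by the round-trip time, so a tester needs the BDP of the path before judging throughput results. A small illustrative helper, with made-up values:

    # Illustrative RFC 6349-style arithmetic: bandwidth-delay product and the
    # window-limited ceiling on single-connection TCP throughput.
    def bdp_bytes(bottleneck_bps: float, rtt_s: float) -> float:
        """Bytes that must be in flight to fill the path."""
        return bottleneck_bps * rtt_s / 8

    def max_tcp_throughput_bps(window_bytes: float, rtt_s: float) -> float:
        """Upper bound on single-connection TCP throughput for a given window."""
        return window_bytes * 8 / rtt_s

    link_bps = 1e9    # 1 Gbps path under test
    rtt = 0.025       # 25 ms round-trip time
    print("BDP:", bdp_bytes(link_bps, rtt) / 1e6, "MB")                    # ~3.1 MB
    print("64 KB window:", max_tcp_throughput_bps(64 * 1024, rtt) / 1e6,
          "Mbps")                                                          # ~21 Mbps

    A Layer 2 test such as RFC 2544 can show a clean gigabit pipe while a window-limited TCP session still delivers only a fraction of it, which is the gap RFC 6349 is designed to expose.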

    Reply
  41. Tomi Engdahl says:

    Net neutrality debate ‘controlled by bots’
    http://www.bbc.com/news/technology-41497342

    More than 80% of the comments submitted to a US regulator on the future of net neutrality came from bots, according to researchers.

    Data analytics company Gravwell said only 17.4% of the comments were unique.

    Most of the 22 million comments submitted to the Federal Communications Commission over the summer had been against net neutrality, it suggested.

    One expert said the findings posed a risk to future polls.

    More than one million comments in July had claimed to have a pornhub.com email address, which had raised suspicions, it said.

    People who submitted comments directly to the FCC website had been “overwhelming in support of net neutrality regulations”, lead researcher Corey Thuen said.

    In contrast, those that were submitted via the FCC-approved platform for bulk submissions had been anti-net neutrality.

    Mr Thuen said: “Seeing a clear difference of opinion between bulk submitted comments versus those that came in via the FCC comment page, we’re forced to conclude that either the nature of submission method has some direct correlation with political opinion, or someone is telling lies on the internet.”

    Bot writers

    Prof Phil Howard, of the Oxford Internet Institute, said of the findings: “This is an extreme example of what happens when a government initiates comment sections, and it does open up questions about what other public consultations have been influenced in this way.

    “It is going to be increasingly rare that public organisations solicit anonymous opinions in this way.”

    It is difficult though to work out who could be behind the bots.

    Three separate analyses in May suggested that more than 400,000 comments with similar wording had been detected.

    Reply
  42. Tomi Engdahl says:

    RF over Glass at the Premises
    http://www.broadbandtechreport.com/webcasts/2017/10/rf-over-glass-at-the-premises.html?cmpid=enl_btr_weekly_2017-10-05

    With all the talk about EPON and GPON and next-generation PON, it might be easy to forget that RF over Glass remains a viable option for operators who want to reach subscribers with fiber. This webinar will look at how RFoG compares to other all-fiber options and the use cases where it shines brightest.

    Reply
  43. Tomi Engdahl says:

    Google Fiber Skipping TV in New Deployments
    http://www.broadbandtechreport.com/articles/2017/10/google-fiber-skipping-tv-in-new-deployments.html?cmpid=enl_btr_weekly_2017-10-05

    Google Fiber (NASDAQ:GOOG) says it won’t be offering TV services in its upcoming fiber-to-the-home (FTTH) deployments in Louisville and San Antonio, focusing instead on high-speed Internet only.

    The company cites the trend toward online streaming video and over-the-top (OTT) services and says that more and more of its customers are choosing Internet-only options in its existing markets. Though Google Fiber didn’t say so, ever-increasing content costs probably also figured into the decision.

    Reply
  44. Tomi Engdahl says:

    RFoG (Radio Frequency over Glass)
    https://www.multicominc.com/solutions/technologies/rfog/

    RFoG is a type of passive optical networking that proposes to transport RF signals that are now transported over copper (principally over hybrid fiber and coax cable), over a Passive Optical Network.

    In the forward direction, RFoG is either a stand-alone point-to-multipoint system or an optical overlay for an existing Passive Optical Network (PON) such as a Gigabit-capable Passive Optical Network (GPON). Reverse RF support is provided by transporting the upstream or return path on a separate return wavelength.

    RFoG offers backward compatibility with existing RF modulation technology; as a result, the existing equipment located at the headend and customer premises can still be utilized. RFoG offers a means to support RF technologies in locations where only fiber is available, or where copper is not permitted or feasible. This technology is targeted toward cable TV operators and their existing HFC networks.

    Radio frequency over glass
    https://en.wikipedia.org/wiki/Radio_frequency_over_glass

    In telecommunications, radio frequency over glass (RFoG) is a deep-fiber network design in which the coax portion of the hybrid fiber coax (HFC) network is replaced by a single-fiber passive optical network (PON). Downstream and return-path transmission use different wavelengths to share the same fiber (typically 1550 nm downstream, and 1310 nm or 1590/1610 nm upstream). The return-path wavelength standard is expected to be 1610 nm, but early deployments have used 1590 nm. Using 1590/1610 nm for the return path allows the fiber infrastructure to support both RFoG and a standards-based PON simultaneously, operating with 1490 nm downstream and 1310 nm return-path wavelengths.

    RFoG delivers the same services as an RF/DOCSIS/HFC network, with the added benefit of improved noise performance and increased usable RF spectrum in both the downstream and return-path directions. Both RFoG and HFC systems can concurrently operate out of the same headend/hub, making RFoG a good solution for node-splitting and capacity increases on an existing network.

    RFoG allows service providers to continue to leverage traditional HFC equipment and back-office applications with the new FTTP deployments. Cable operators can continue to rely on the existing provisioning and billing systems, cable modem termination system (CMTS) platforms, headend equipment, set-top boxes, conditional access technology and cable modems while gaining benefits inherent with RFoG and FTTx.

    RFoG provides several benefits over traditional network architecture:

    More downstream spectrum; RFoG systems support 1 GHz and beyond, directly correlating to increased video and/or downstream data service support
    More upstream bandwidth; RFoG’s improved noise characteristics allow for the use of the full 5–42 MHz return-path spectrum. Additionally, higher-performance RFoG systems not only support DOCSIS 3.0 with bonding, but also enable 64 quadrature amplitude modulation (QAM) upstream transmission in a DOCSIS 3.0 bonded channel, dramatically increasing return-path bandwidth.
    Improved operational expenses; RFoG brings the benefits of a passive fiber topology. Removing active devices in the access network reduces overall power requirements, as well as ongoing maintenance costs that would normally be needed for active elements (such as nodes and amplifiers).

    Both cost savings and increased capacity for new services (revenue generating and/or competitive positioning) are driving the acceptance of RFoG as a cost-effective step on the path towards a 100-percent PON-based access network.

    As with an HFC architecture, video controllers and data-networking services are fed through a CMTS/edge router. These electrical signals are then converted to optical ones, and transported via a 1550 nm wavelength through a wavelength-division multiplexing (WDM) platform and a passive splitter to a fiber-optic micro-node located at the customer premises. If necessary, an optical amplifier can be used to boost the downstream optical signal to cover a greater distance.

    The Society of Cable Telecommunications Engineers (SCTE) has approved SCTE 174 2010, the standard for RFoG.

    Reply
  45. Tomi Engdahl says:

    What is RFoG | RF over Glass basics | RFoG Architecture
    http://www.rfwireless-world.com/Terminology/RFoG-RF-over-Glass-Architecture.html

    The architecture consists of an optical hub, an ODN (Optical Distribution Network), and the subscriber home premises. The main part in the RFoG architecture is the ONU (Optical Network Unit). The ONU is located between the RF and optical domains. Depending upon whether AM or FM modulation is used, the RF modulation part will vary.

    As in RF, the optical system uses one wavelength for transmission (1550 nm) and another wavelength for reception (1310 nm/1610 nm). Just as a diplexer is used to separate/combine transmit and receive bands in RF, the optical system uses a WDM (wavelength division multiplexer/demultiplexer) for the optical signals.

    RF over Glass (RFoG)
    CP85x4U/6U/7U/8U-00
    RFoG MDU R-ONU Family with 42/54, 65/85, and 85/102 MHz Options
    https://www.arris.com/globalassets/resources/data-sheets/1510788-reva_cp85x4-6-7-8u-00_mdu_r-onu.pdf

    Reply
  46. Tomi Engdahl says:

    RF over Glass (RFoG) Testing
    https://www.excentis.com/testing/example-projects/rf-over-glass-rfog-testing

    With Radio Frequency over Glass (RFoG) solutions cable operators can directly deploy fiber to the premises while leveraging their existing back-office HFC equipment and applications. All RF infrastructure stays in place, but now the fiber goes to the customer premises instead of getting terminated at the fiber node. This is especially useful for greenfield deployments.

    Reply
  47. Tomi Engdahl says:

    NXP Seeks ‘Edge’ vs. Intel, Cavium
    https://www.eetimes.com/document.asp?doc_id=1332402&

    TOKYO — As the lines begin to blur between cloud and edge computing, NXP Semiconductors is racing to offer the highest performance SoC of the company’s Layerscape family.

    The new chip, LX2160A, can offload heavy-duty computing done at data centers in the cloud, enabling the middle of the network — typically, service operators — to execute network virtualization and run high-performance network applications on network equipment such as base stations.

    Toby Foster, senior product manager for NXP, told us that his team developed the new high-performance chip with three goals in mind. They sought first to enable new types of virtualization in the network, second to achieve new heights of integration and performance at low power featuring next-generation I/Os, and third, to double the scale of virtual network functions and crypto, compared to NXP’s previous Layerscape SoC (LS2088A), while maintaining low power consumption.

    Specifically, the LX2160A features 16 high-performance ARM Cortex-A72 cores running at over 2 GHz within a 20- to 30-watt power envelope. It supports both the 100 Gbit/s Ethernet and PCIe Gen4 interconnect standards.

    Why edge computing?
    The industry, including NXP, tends to view edge processing as the driver for the next phase of networking, computing and IoT infrastructure growth.
    By moving workloads from the cloud to the edge, operators will suffer less latency while gaining resiliency and bandwidth reliability, explained Foster.

    Bob Wheeler, principal analyst responsible for networking at the Linley Group, told us, “In some cases, such as content delivery networks, the transition from the cloud to the edge is already happening.” He predicted, “Mobile edge computing will primarily happen in conjunction with 5G rollouts starting in 2019.”

    Reply
  48. Tomi Engdahl says:

    The Importance Of Wi-Fi
    https://semiengineering.com/the-importance-of-wifi/

    A look at how this technology became such a critical part of our communications infrastructure

    Wi-Fi has had a huge impact on the modern world, and it will continue to do so. From home wireless networks to offices and public spaces, the ubiquity of high speed connectivity without reliance on cables has radically changed the way computing happens. It would not be much of an exaggeration to say that because of ready access to Wi-Fi, we are able to lead better lives – using our laptops, tablets and portable electronics in a far more straightforward manner, with a high degree of mobility, no longer having to worry about a complex tangle of wires tying us down.

    Though it may be hard to believe, it is now two decades since the original 802.11 standard was ratified by the IEEE. This first in a series of blogs will look at the history of Wi-Fi to see how it has overcome numerous technical challenges and evolved into the ultra-fast, highly convenient wireless standard that we know today. We will then go on to discuss what it may look like tomorrow.

    Unlicensed Beginnings

    Wireless Ethernet
    Though the 802.11 wireless standard was released in 1997, it didn’t take off immediately. Slow speeds and expensive hardware hampered its mass market appeal for quite a while – but things were destined to change. 10 Mbit/s Ethernet was the networking standard of the day. The IEEE 802.11 working group knew that if they could equal that, they would have a worthy wireless competitor. In 1999, they succeeded, creating 802.11b.

    Soon after 802.11b was established, the IEEE working group also released 802.11a, an even faster standard. Rather than using the increasingly crowded 2.4 GHz band, it ran on the 5 GHz band and offered speeds up to a lofty 54 Mbits/s.

    Apple expected to have the cards at a $99 price point, but of course the volumes involved could potentially be huge. Lucent Technologies, which had acquired NCR by this stage, agreed.

    PC makers saw Apple computers beating them to the punch and wanted wireless networking as well. Soon, key PC hardware makers including Dell, Toshiba, HP and IBM were all offering Wi-Fi.

    Microsoft also got on the Wi-Fi bandwagon with Windows XP. Working with engineers from Lucent, Microsoft made Wi-Fi connectivity native to the operating system.

    Reply
  49. Tomi Engdahl says:

    High Efficiency Wireless: 802.11ax
    http://www.electronicdesign.com/white-paper/high-efficiency-wireless-80211ax?code=UM_NN7NIS1&utm_rid=CPG05000002750211&utm_campaign=13323&utm_medium=email&elq2=61704fb877624eeb8440281331668f21

    The upcoming IEEE 802.11ax High-Efficiency Wireless (HEW) standard promises to deliver four times greater data throughput per user. It relies on multiuser technologies to make better use of the available Wi-Fi channels and serve more devices in dense user environments. Explore this technology introduction white paper to learn about the new applications of 802.11ax, the key technical innovations to the standard, and its test and measurement challenges.

    Reply
