Telecom trends for 2015

In a few years there will be close to 4 billion smartphones on earth. Ericsson’s annual Mobility Report forecasts increasing mobile subscriptions and connections through 2020: 9.5 billion smartphone subscriptions by 2020 and an eight-fold increase in traffic. The report also expects that by 2020, 90% of the world’s population over six years old will have a phone. In short, it describes a connected world where everyone will have a connection one way or another.

What about the mobile systems in use? Currently the majority of the world operates on GSM and HSPA (3G). Some countries are starting to have good 4G (LTE) coverage, but on average only about 20% of the world is covered by LTE. 4G/LTE small cells will grow at twice the rate of 3G and surpass both 2G and 3G in 2016.

Ericsson expects that 85% of mobile subscriptions in the Asia Pacific, the Middle East, and Africa will be 3G or 4G by 2020. 75%-80% of North America and Western Europe are expected to be using LTE by 2020. China is by far the biggest smartphone market by current users in the world, and it is rapidly moving into high-speed 4G technology.

Sales of mobile broadband routers and mobile broadband USB sticks are expected to continue to drop. In 2013, 87 million of those devices were sold; in 2014, sales fell another 24 per cent. China’s Huawei is the market leader (45%), so it has the most to lose.

The small cell backhaul market is expected to grow. ABI Research believes 2015 will witness meaningful small cell deployments. Millimeter wave technology, thanks to its large bandwidth and NLOS capability, is the fastest growing technology. 4G/LTE small cell solutions will again drive most of the microwave, millimeter wave, and sub-6GHz backhaul growth in metropolitan, urban, and suburban areas. Sub-6GHz technology will capture the largest share of small cell backhaul “last mile” links.

Technology for full-duplex operation on a single radio frequency has been designed. The new practical circuit, known as a circulator, lets a radio send and receive data simultaneously over the same frequency and could supercharge wireless data transfer. The design avoids magnets and uses only conventional circuit components. A radio-wave circulator used in wireless communications can double the bandwidth by enabling full-duplex operation, i.e. devices can send and receive signals in the same frequency band simultaneously. Let’s wait and see whether this technology turns out to be practical.

Broadband connections are finally more popular than traditional wired telephone lines: in the EU, fixed broadband subscriptions will outnumber traditional circuit-switched fixed lines for the first time by the end of 2014.

After six years in the dark, Europe’s telecoms providers see a light at the end of the tunnel. According to a new report commissioned by industry body ETNO, the sector should return to growth in 2016. The projected growth for 2016, however, is small – just 1 per cent.

With headwinds and tailwinds, how high will the cabling market fly? Cabling for enterprise local area networks (LANs) experienced growth of between 1 and 2 percent in 2013, while cabling for data centers grew 3.5 percent, according to BSRIA, for a total global growth of 2 percent. The structured cabling market is facing a turbulent time. Structured cabling in data centers continues to move toward the use of fiber. The number of smaller data centers that will use copper will decline.

Businesses will increasingly shift from buying IT products to purchasing infrastructure-as-a-service and software-as-a-service. Both trends will increase the need for processing and storage capacity in data centers, and we will also need fast connections to those data centers. This will cause significant growth in WiFi traffic, which will mean more structured cabling used to wire access points. Convergence will also result in more cabling needed for Internet Protocol (IP) cameras, building management systems, access controls and other applications, which could mean a decrease in the installation of separate special-purpose cabling for those applications.

The future of your data center network is a moving target, but one thing is certain: it will be faster. The four key developments in this field are 40GBase-T, Category 8, 32G and 128G Fibre Channel, and 400GbE.

Ethernet will increasingly move away from the 10/100/1000 speed series as proposals for new speeds push in. The move beyond gigabit Ethernet is gathering pace, with a cluster of vendors gathering around the IEEE standards effort to bring 2.5 Gbps and 5 Gbps speeds to the ubiquitous Cat 5e cable. With the IEEE standardisation process under way, the MGBase-T alliance represents an industry effort to accelerate the adoption of 2.5 Gbps and 5 Gbps speeds for connections to fast WLAN access points. Intense attention is also being paid to the development of 25 Gigabit Ethernet (25GbE) and next-generation Ethernet access networks, and work on 40GBase-T is under way.

Cat 5e vs. Cat 6 vs. Cat 6A – which should you choose? Stop installing Cat 5e cable. “I recommend that you install Cat 6 at a minimum today.” The cable will last much longer and supports higher speeds that Cat 5e simply cannot. Category 8 cabling is coming to data centers to support 40GBase-T.

A Power over Ethernet plugfest is planned for 2015 to test Power over Ethernet products. The plugfest will focus on the IEEE 802.3af and 802.3at standards relevant to IP cameras, wireless access points, automation, and other applications. It will test participants’ devices against the respective IEEE 802.3 PoE specifications, which distinguish IEEE 802.3-based devices from non-standards-based PoE solutions.
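
To put those two standards in perspective, here is a minimal Python sketch based on the per-port power budgets published in the 802.3af/802.3at specifications; the example devices and wattages in the calls at the end are illustrative assumptions, not plugfest data.

    # Usable power at the powered device under the IEEE 802.3 PoE specs:
    # 802.3af (Type 1) sources up to 15.4 W per port (about 12.95 W usable);
    # 802.3at (Type 2, "PoE+") sources up to 30 W (about 25.5 W usable).
    POE_BUDGET_W = {"802.3af": 12.95, "802.3at": 25.5}

    def poe_standard_for(load_watts):
        """Pick the lowest PoE standard whose per-port budget covers the load."""
        for std, budget in sorted(POE_BUDGET_W.items(), key=lambda kv: kv[1]):
            if load_watts <= budget:
                return std
        return None  # beyond 802.3af/at; would need a non-standard PoE solution

    print(poe_standard_for(7.0))   # a typical IP camera fits within 802.3af
    print(poe_standard_for(20.0))  # a heated PTZ camera needs 802.3at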

Gartner expects that wired Ethernet will start to lose its position in the office in 2015, or within a few years after that, because of the transition to using the Internet mainly on smartphones and tablets. The change is significant, because it will break Ethernet’s long reign in the office. Consumer devices have already moved to wireless, and now it is the office’s turn. Many factors speak in favor of the mobile office: research predicts that by 2018, 40 per cent of enterprises and organizations in various fields will define WLAN as the default, so current workstations, desktop phones, projectors and the like will be transferred to wireless. Expect the wireless LAN equipment market to accelerate in 2015 as spending by service providers and education comes back, 802.11ac reaches critical mass, and Wave 2 products enter the market.

Scalable and secure device management for telecom, network, SDN/NFV and IoT devices will become a standard feature. Whether you are building a high-end router or deploying an IoT sensor network, a device management framework with support for new standards such as NETCONF/YANG and web technologies such as Representational State Transfer (REST) is fast becoming a standard requirement. Next-generation device management frameworks can provide substantial advantages over legacy SNMP and proprietary frameworks.
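
As a concrete illustration of the kind of framework described above, here is a minimal NETCONF sketch in Python using the open source ncclient library; the device address and credentials are hypothetical placeholders, and real devices will differ.

    # Fetch the running configuration from a NETCONF-capable device.
    # Requires the ncclient package (pip install ncclient).
    from ncclient import manager

    with manager.connect(
        host="192.0.2.1",       # placeholder address from the documentation range
        port=830,               # standard NETCONF-over-SSH port
        username="admin",       # made-up credentials for this sketch
        password="admin",
        hostkey_verify=False,
    ) as m:
        # The reply is XML structured according to the device's YANG models.
        config = m.get_config(source="running")
        print(config.data_xml)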

 

U.S. regulators resumed consideration of mergers proposed by Comcast Corp. and AT&T Inc., suggesting a decision as early as March: Comcast’s $45.2 billion proposed purchase of Time Warner Cable Inc and AT&T’s proposed $48.5 billion acquisition of DirecTV.

There will be changes in the management of global DNS. The U.S. is in the midst of handing over its oversight of ICANN to an international consortium in 2015. The National Telecommunications and Information Administration, which oversees ICANN, assured people that the handover would not disrupt the Internet as the public has come to know it. Discussion is going on about what can replace the US government’s current role as IANA contract holder. IANA is the technical body that runs things like the global domain-name system and allocates blocks of IP addresses. Whoever controls it controls the behind-the-scenes of the internet; today that is ICANN, under contract with the US government, but that agreement runs out in September 2015.

 

1,044 Comments

  1. Tomi Engdahl says:

    High-Speed Converters Aid Record Terabit Field Trial
    http://www.eetimes.com/document.asp?doc_id=1327044&

    Ultra high-speed digital-to-analog (DAC) and analog-to-digital (ADC) converters from Socionext Inc. (Yokohama, Japan) have been used in a communications field trial that achieved 38.4Tbps over 762 kilometers of optical fiber.
    Socionext Inc. is a chip company formed through the merger of the system LSI businesses of Fujitsu Ltd. and Panasonic Corp. that started operations on March 1, 2015.

    The field trial took place over a 762-kilometer fiber-optic link from Lyon to Marseille and back to Lyon that is part of the Orange optical network. Socionext converters featuring sample rates of up to 92Gsamples/s and high analog bandwidth were used within coherent receivers and transmitters for the record-breaking data communications.

    A team comprising Orange, Coriant, Ekinops and Keopsys successfully demonstrated the highest ever C-band transmission capacity using 24 by 1 Tbps/DP-16 QAM, 32 by 1 Tbps/DP-32 QAM and 32 by 1.2 Tbps/DP-64 QAM modulation formats in a ‘live’ networking environment.

  2. Tomi Engdahl says:

    Ethernet Standards Ramp Up For Faster IT
    http://www.eetimes.com/document.asp?doc_id=1327039&

    The Ethernet Alliance and UNH-IOL hosted a plugfest to test interoperability of 40 and 100 GbE, define the 25 GbE standard, and help businesses deploy new Ethernet technologies more quickly.

    In pursuit of this robust architecture, the Ethernet Alliance held its largest plugfest to date at the University of New Hampshire InterOperability Laboratory (UNH-IOL), last week. Engineers from 23 companies in the networking ecosystem — including Arista, Cisco, Dell, Hitachi, and Intel — gathered at the UNH-IOL’s 32,000-square-foot facility to work the kinks out of their 40 and 100 Gigabit Ethernet (GbE) technologies and to provide input into the developing standard for 25 GbE.

  3. Tomi Engdahl says:

    US runs out of IPv4 internet addresses and declares the internet full
    Laugh along if you feel that IP-less is the truth
    http://www.theinquirer.net/inquirer/news/2416244/us-runs-out-of-ipv4-internet-addresses-and-declares-the-internet-full

    THE US has finally run out of IPv4 internet addresses. The American Registry of Internet Numbers (ARIN) confirmed that it opened its waiting list yesterday, the penultimate continent to do so.

    This doesn’t mean that all of a sudden the IPv4 network is at breaking point. Numbers are given out in chunks, and it is these chunks that are running low. In other words it’s not you that needs to worry, it’s your ISP.

    There are still smaller blocks of 512 and 256 addresses available, but that’s nowhere near enough to satisfy the demand.

    However, in the true spirit of the industry, the problem is being left to the last minute and most ISPs are not even offering an IPv6 service yet.

  4. Tomi Engdahl says:

    Is the End of IPv4 at Hand? Not Anytime Soon…
    IPv4 still has a long life ahead
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1327058&

    The American Registry for Internet Numbers (ARIN) has sent out an alert saying its share of the dwindling IPv4 addresses is near rock bottom, so it is going to a stricter procedure for approving requests for those that are left.

    An article in the Washington Post on the ARIN announcement seems to imply that major changes are imminent, but that we are not to worry because IPv6 is available to save us and give us all the URL addresses we need. Only the second part of that is true. Except for many mobile smartphone and tablet users whose wireless service providers are just now making the shift, the transition to IPv6 will take years and even decades. Many existing Internet Service Providers with IPv4 will be around for a long time. Indeed, there is already a growing market for buying used and highly desirable IPv4 addresses.

    IPv6 with its 340 trillion trillion trillion possible unique combinations has been sitting in the wings as a replacement for IPv4 since about 2000.
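
    That “340 trillion trillion trillion” figure is simply the size of a 128-bit address space; a quick Python check of the arithmetic:

        ipv6_addresses = 2 ** 128   # IPv6 addresses are 128 bits wide
        ipv4_addresses = 2 ** 32    # IPv4 addresses are 32 bits wide
        print(format(ipv6_addresses, ".3e"))     # ~3.403e+38, i.e. 340 trillion trillion trillion
        print(ipv6_addresses // ipv4_addresses)  # 2**96, about 7.9e28 IPv6 addresses per IPv4 address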

    But few if any large organizations with the bankroll to establish a presence on the Internet have felt it was economically viable to invest in it until recently.

    Many of those most active in making the shift are in mobile smartphone markets, including AT&T, Google, T-Mobile, and Verizon. Most of them are still only half-way through their deployment and testing on IPv6.

    Even with the number of its remaining unique addresses drying up, IPv4 still accounts for 93 percent of worldwide Internet traffic.

    The reason there is not a rush on IPv6 is simple: economics. Most local, regional, and in some cases national Internet Service Providers are not able or are unwilling to pay the expense of transitioning from their existing base of IPv4-based routers, switches, and servers, except on a slow and years-long incremental basis. In the United States, only the largest ISPs have committed to the transition.

    But most second and third tier regional, statewide, and local ISPs who have major investments in IPv4 are not rushing to make the shift. Engineers and technicians at the several regional ISPs I have dealt with directly over the last decade or so point out that there is no compelling end-use application that cannot be done with existing IPv4. Since about 2000, through such traditional techniques as several levels of subdomains, dynamic rather than static network address translation (NAT), destination and stateful NAT, NAT loopback, port address translation, and Internet connection sharing, they have been able to keep up with demands, not only for more bandwidth, but for more features and flexibility.
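
    To make the port address translation part of that list concrete, here is a toy Python sketch of the idea (purely illustrative, not any ISP’s actual implementation): many private hosts share one public IPv4 address and are told apart by the public port number.

        # Toy PAT table: (private_ip, private_port) -> public source port.
        import itertools

        PUBLIC_IP = "203.0.113.7"        # placeholder address from the documentation range
        _ports = itertools.count(40000)  # pool of public source ports
        nat_table = {}

        def translate_outbound(private_ip, private_port):
            """Map a private flow to a unique public (ip, port) pair."""
            key = (private_ip, private_port)
            if key not in nat_table:
                nat_table[key] = next(_ports)
            return PUBLIC_IP, nat_table[key]

        # Two different homes sharing the same public address:
        print(translate_outbound("10.0.0.2", 51000))  # ('203.0.113.7', 40000)
        print(translate_outbound("10.0.0.3", 51000))  # ('203.0.113.7', 40001)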

    Where such enhancements and workarounds put additional load on their network hardware, IPv4 Internet Service Providers are shifting to switches and routers based on more powerful and lower-cost multicore processor designs. Also aiding their efforts at improving IPv4 is the shift to systems based on software defined networking (SDN). Such routers and switches depend not on dedicated hardware for separate network functions, but on software-based network function virtualization (NFV), allowing lightning-fast reconfiguration of a variety of network elements.

    While such enhancements of existing IPv4-based systems involve additional investment in hardware and software, this can be done at a lower cost, and on a more piecemeal, as needed, basis rather than by replacing their existing IPv4 systems with IPv6.

    In the U.S. that transition will occur more quickly only if one of three things happens: first, a dramatic use case for IPv6 emerges that triggers demand from Internet users, causing IPv4-based ISPs to make the shift; second, government action is taken, either in the form of significant incentives or a direct statutory requirement; or third, the economics of maintaining IPv4 becomes unviable. Nothing I see now or in the near future makes any of these likely any time soon.

  5. Tomi Engdahl says:

    #FishermenNeedTwitter: Vodafone extends 4G to Iceland’s coastal ports
    Feel free to phish while you fish, fine fellows
    http://www.theregister.co.uk/2015/07/08/vodafone_iceland_extends_4g_coverage_to_fishing_industry/

    Vodafone Iceland is feeding 4G to the fishes and adding significant coverage for the maritime industry.

    Industry website Telecompaper reports that the network has just added the areas of Blönduós, Reyðarfjörður, and the town Hofn to the places which get 4G coverage.

    It’s seemingly aimed at seafarers, particularly fishermen.

    Iceland isn’t the only country where there is significant coverage at sea. In the early days of GSM, Greece had way more coverage between the Greek islands than in the mountains of the mainland.

  6. Tomi Engdahl says:

    TfL to splash £400m on networking deal, despite GDS opposition
    Opts for unproved ‘tower model’, which may or may not be govt policy
    http://www.theregister.co.uk/2015/07/08/tfl_to_splash_400m_in_network_deal/

    Transport for London (TfL) has opened its wallet and invited suppliers to reach in and grab £400m under a networking deal.

    The scope of the deal is for a single supplier to provide access network and wide area network services as a managed service.

    It’ll be a “major element” of the department’s plan to disaggregate its current Fujitsu contract and split the various components into “towers”.

    End-user computing and hosting are expected to comprise the rest of the contract, with a tender for a “service integration and management” (SIAM) provider also underway.

  7. Tomi Engdahl says:

    The modest father of SMS, who had much to be modest about
    Only the true inventor would deny his invention? Erm, well, not quite
    http://www.theregister.co.uk/2015/07/08/the_real_history_of_sms/

    Matti Makkonen died last week and was celebrated as the father of SMS. He’s been described as being too modest to acknowledge his involvement. It seems, however, that the story of how Short Messaging came to be is far more complicated than we originally thought, and the system has many fathers.

    In fact, not only did Makkonen not invent SMS, he also reneged on an agreement with those who did to stop claiming that he did.

    Makkonen’s lie – inadvertent or not – saw him recognised by the Finnish Government as one of the Great Finns, alongside composer Sibelius, and he was awarded the 1999 Economist Innovation Award alongside Tim Berners-Lee and Bill Gates.

    It appears that Makkonen, then a local Nokia manager, went out for a pizza with a journalist from a local Finnish paper. During the interview, he made a casual remark that he had imagined the potential of sending text messages on mobile phones. The journalist then embellished the story that he was the “father of SMS”.

    As much as he may have had that thought, it’s clear that he didn’t ever submit it to anyone working on GSM standards. As turgid as all the standards documentation is, it’s amazingly comprehensive on who attended which meetings, which are minuted in detail.

    There is a huge irony in this: the true creators of SMS remain so modest about their invention that they have been usurped in the public eye by someone who didn’t do the work and yet has been accused of being too modest.

    SMS Development and Standardisation Milestones
    http://www.gsm-history.org/9.html

  8. Tomi Engdahl says:

    Greenpeace fingers YouTube, Netflix as threat to greener Internet
    http://www.computerworld.com/article/2920551/sustainable-it/greenpeace-fingers-youtube-netflix-as-threat-to-greener-internet.html

    The next time you watch “House of Cards” on Netflix, think about the impact you might be having on the environment.

    As the Internet powers ever more services, from digital video to on-demand food delivery, energy use in data centers will rise. To reduce their impact on the environment, companies like Apple, Google and Facebook have taken big steps to power their operations with renewable energy sources like hydro, geothermal and solar.

    But despite those efforts, the growth of streaming video from the likes of Netflix, Hulu and Google’s YouTube presents a pesky challenge to the companies’ efforts to go green, according to a report Tuesday from Greenpeace.

    “The rapid transition to streaming video models, as well as tablets and other thin client devices that supplant on-device storage with the cloud, means more and more demand for data center capacity, which will require more energy to power,” the report’s authors wrote.

    It might seem that online services, like video streaming, would reduce carbon footprint versus, say, driving to a movie theater. But by enabling much higher levels of consumption, the shift to digital video may actually be increasing the total amount of electricity consumed, and the associated pollution from electricity generation, the report said.

    “Unless leading Internet companies find a way to leapfrog traditional, polluting sources of electricity, the convenience of streaming could cause us to increase our carbon footprint,” wrote the authors of the report, “Clicking Green: A Guide to Building the Green Internet.”

    Clicking Clean:
    A Guide to Building the Green Internet
    http://www.greenpeace.org/usa/Global/usa/planet3/PDFs/2015ClickingClean.pdf

    The internet is rapidly working its way into nearly every aspect of the modern economy. Long unshackled from our web browser, we now find the internet at every turn, and ready to play a bigger role in our lives with each passing day. Today, the internet is rapidly transforming how you watch TV. Tomorrow, the internet may be driving your car and connecting you to high-definition video from every corner of the planet via your watch.

    The magic of the internet seems almost limitless. But each new internet-enabled magic trick means more and more data, now growing over 20% each year.

    While there may be significant energy efficiency gains from moving our lives online, the explosive growth of our digital lives is outstripping those gains. Publishing conglomerates now consume more energy from their data centers than their printing presses. Greenpeace has estimated that the aggregate electricity demand of our digital infrastructure back in 2011 would have ranked sixth in the world among countries.

    The rapid transition to streaming video models, as well as tablets and other thin client devices that supplant on-device storage with the cloud, means more and more demand for data center capacity, which will require more energy to power.

    The transition to online distribution models, such as video streaming, appears to deliver a reduction in the carbon footprint over traditional models of delivery. However, in some cases, this shift may simply be enabling much higher levels of consumption, ultimately increasing the total amount of electricity consumed and the associated pollution from electricity generation.

  9. Tomi Engdahl says:

    Time Warner Cable Owes $229,500 To Woman It Would Not Stop Calling
    http://yro.slashdot.org/story/15/07/08/131251/time-warner-cable-owes-229500-to-woman-it-would-not-stop-calling

    Reuters reports that a Manhattan federal judge has ruled Time Warner Cable must pay Araceli King $229,500 for placing 153 automated calls meant for someone else to her cellphone in less than a year, even after she told them to stop. King accused Time Warner Cable of harassing her in violation of the Telephone Consumer Protection Act, a law meant to curb robocall and telemarketing abuses; the company believed it was calling Perez, who had consented to the calls. In awarding triple damages of $1,500 per call for willfully violating that law, U.S. District Judge Alvin Hellerstein said “a responsible business” would have tried harder to find Perez and address the problem.

    Time Warner Cable owes $229,500 to woman it would not stop calling
    http://www.reuters.com/article/2015/07/07/us-twc-robocalls-idUSKCN0PH2H920150707

    King, of Irving, Texas, accused Time Warner Cable of harassing her by leaving messages for Luiz Perez, who once held her cellphone number, even after she made clear who she was in a seven-minute discussion with a company representative.

    The calls were made through an “interactive voice response” system meant for customers who were late paying bills.

    “Companies are using computers to dial phone numbers,” King’s lawyer Sergei Lemberg said in a phone interview. “They benefit from efficiency, but there is a cost when they make people’s lives miserable. This was one such case.”

  10. Tomi Engdahl says:

    Four out of five e-mails are junk

    Last year, about 35 trillion e-mails were sent worldwide. Juniper Research found that 80 per cent of this amount, i.e. four out of five e-mails, was spam.

    This year, the total number of electronic messages – e-mails, text messages, multimedia messages and various instant messages – will grow to 94.2 trillion. By 2019, the volume is expected to reach 160 trillion messages.

    Social media’s share of this messaging pie is growing all the time. For example, over 5.8 billion postings are currently sent on Facebook every day, and the number keeps growing.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3058:nelja-viidesta-sahkopostista-on-roskaa&catid=13&Itemid=101

  11. Tomi Engdahl says:

    Bits traveled 12,000 kilometers of fiber

    Photonics researchers at the University of California, San Diego have made a major breakthrough in fiber-optic data transmission. They managed to decode bits correctly even after they had traveled a distance of 12,000 kilometers in optical fiber.

    A significant part of the result is that the bits covered this distance using only standard fiber amplifiers. According to the researchers, this shows that fiber links do not need as many regeneration chips over long distances as previously assumed.

    The breakthrough is based on wideband frequency combs developed by the researchers. These make the crosstalk between channels in the fiber predictable, which allows the receiver to identify the bits correctly.

    Thanks to the new waveforms, the researchers were able to increase the link’s transmission power 20-fold and still decode the bits of the received signal correctly.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3023:bitit-kulkivat-kuidussa-12-000-kilometria&catid=13&Itemid=101

  12. Tomi Engdahl says:

    5G network architecture being developed under the leadership of Nokia

    European companies, operators and research institutions have set up a new project to develop a new network architecture for future 5G connections. 5G NORMA (5G Novel Radio Multiservice adaptive Network Architecture) is a largely Nokia Networks driven project.

    5G NORMA is part of the 5G-PPP joint programme. It defines the architecture of future 5G networks, covering both the radio and the core network. The new consortium will work for 2.5 years, so a draft of the new 5G network architecture should be completed by the end of 2017.

    The project partners include Nokia and the soon-to-be-combined Alcatel-Lucent, NEC, Atos, the operators Deutsche Telekom, Orange and Telefónica, as well as research institutions: the University of Kaiserslautern, King’s College London and the Carlos III University of Madrid.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3036:5g-verkon-arkkitehtuuria-kehitetaan-nokian-johdolla&catid=13&Itemid=101

  13. Tomi Engdahl says:

    World’s smallest satellite module from Finland

    Salo-based Satel has introduced a new satellite receiver module which it boasts is the world’s smallest in the UHF band. The Satelline TR4 module has a transmission power of 1000 milliwatts, and it supports the Pacific Crest, Trimble and Satel protocols.

    The module is very compact: the 56 x 36 millimeter radio transmits data at 38,400 bits per second, and at six millimeters thick it is easy to integrate into smaller device casings.

    The TR4 module supports features familiar from Satel’s previous Satelline Easy and 3AS modules, such as channel scanning, error correction, and broad protocol support.

    Satel says it will later launch physically identical modules that support the increasingly popular license-free frequencies: 869 MHz in Europe and 915 MHz for the American market.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3043:suomesta-maailman-pienin-satelliittimoduuli&catid=13&Itemid=101

  14. Tomi Engdahl says:

    Record and analyze RF and wireless signals
    http://www.edn.com/electronics-products/electronic-product-reviews/other/4439786/Record-and-analyze-RF-and-wireless-signals?_mc=NL_EDN_EDT_EDN_consumerelectronics_20150708&cid=NL_EDN_EDT_EDN_consumerelectronics_20150708&elq=a002b2677df348baba716c9e25cba105&elqCampaignId=23814&elqaid=26895&elqat=1&elqTrackId=18e52d3c5bea4d71bf5e563d14a1b256

    Cellular service providers, wireless product manufacturers, makers of military/aerospace communications products, and others often need to record wireless signals for verification, compliance and troubleshooting purposes. The RP-6100 series from Averna, with two-channel and four-channel versions, lets you store hours of signal data and play it back for analysis.

    Each instrument comes preloaded with Averna’s RF Studio software for signal storage, viewing and analysis. RF measurements include noise figure, power, and spectrum.

    The recorders can combine with Skydel GNSS Simulator software to simulate Global Navigation Satellite System signals for testing receivers.

  15. Tomi Engdahl says:

    Intel Unite promises low-cost business collaboration
    Compute power in the conference room lets firms make use of existing resources
    http://www.theinquirer.net/inquirer/news/2416770/intel-unite-promises-low-cost-business-collaboration

    INTEL IS PUSHING the benefits of its Unite technology for cost-effective collaboration, enabling employees to easily set up a meeting where they can share screens and other content using a variety of endpoint devices and existing meeting room resources.

    Unite was announced by Intel at the Computex show in Taiwan last month, and is designed to drive compute power into the conference room to enable new capabilities, as well as allowing businesses to make use of existing resources such as projectors and screens that they may already have installed.

    The technology comprises Unite software that the customer purchases ready installed on a mini PC based on Intel’s Core vPro platform. This essentially acts as the central hub for meetings, enabling users to view content or share their own screen via wireless connectivity, and even include colleagues connecting remotely via the internet.

    Chad Constant, director of marketing for Intel’s business client platforms, said that it is designed to offer a cost-effective solution for collaboration that lets customers reuse assets, rather than being a rival for technologies such as Microsoft’s Surface Hub hardware coming later this year.

  16. Tomi Engdahl says:

    EU net neutrality deal miraculously keeps everyone happy
    Despite zero-rating omission, text closes loopholes and promises level playing field
    http://www.theregister.co.uk/2015/07/09/net_neutrality_deal_closes_loopholes_promises_level_playing_field_without_scaring_the_horses/

    Thanks to a 10-hour meeting, it appears that EU negotiators have done the unexpected: created net neutrality rules that keep both digital rights activists and telco operators happy.

    Last Tuesday, after three months of toing and froing between member states and the European Parliament, a last-ditch political deal was pushed through on the core text of the so-called Telecoms Package.

    However, as expressed by all sides, the devil would be in the detail of the recitals (explanatory guidelines for implementation). On Wednesday, those recitals were published and met with surprising enthusiasm.

    Although the phrase “net neutrality” does not actually appear in the text, Estelle Massé of digital rights group Access said that the text of Article 3.3 of the proposal is essentially the definition of net neutrality. Under the proposal, internet service providers would be banned from blocking or throttling internet speeds for certain services for commercial reasons.

    “The text seems workable,” said Massé. “Recital 11 on specialised services criteria seems fair. Providers cannot prioritise content, so there won’t be a fast lane and a slow lane, which is what we feared. However, enforcement will be key and because the text is not quite as clear as we would like, there will be a lot of room for interpretation,” she said.

    The text acknowledges that “there is demand on the part of content, applications and services providers, as well as on the part of end-users, to be able to provide electronic communication services other than internet access services, which require specific quality of service levels”.

    Internet traffic can be managed to deal with “temporary or exceptional congestion”. In the first draft of the text, there was no explanation as to what constitutes temporary or exceptional, but the guidelines published today nail this down more firmly.

    Temporary congestion may be caused by physical obstructions, lower indoor coverage, technical failure – such as a service outage due to broken cables or other infrastructure elements – unexpected changes in routing of traffic or “large increases in network traffic due to emergency or other situations beyond the control of the internet access service provider”.

    However, if this congestion recurs regularly, it is not something that can be deemed “exceptional”.

    The only controversial area that has not been addressed by the new text is the issue of zero-rated services. These are services offered to customers for free, thanks to a deal between the service provider – for example Facebook, Spotify or Netflix – and the telco provider to pay for the service at source.

  17. Tomi Engdahl says:

    ICANN running the global internet? It’s gonna be OK, it’s gonna be OK, US Congress told
    IANA power transfer questioned during Washington DC hearing
    http://www.theregister.co.uk/2015/07/08/congressional_iana_hearing/

    US Congress was soothed Wednesday morning over the fundamental shift of the IANA contract, which oversees the running of the global internet, from the US government to non-profit domain overlord ICANN, but significant questions remain.

    In a hearing called by the House Energy and Commerce Committee, ICANN’s smooth-talking CEO Fadi Chehade and the US official in charge of the transition, assistant commerce secretary Larry Strickling, both assured representatives that the process was going well and that their concerns were being heard and acted upon.

    With the expectation that the Dotcom Act will pass into law, both Chehade and Strickling will soon require Congressional sign-off on the transfer of power that will have been more than two years in the making.

    From the House of Representatives’ perspective, there are four main issues:

    Whether ICANN will remain headquartered in the US.
    Whether other governments will be prevented from taking control.
    Whether ICANN is enforcing its contracts sufficiently.
    Whether ICANN will be made sufficiently accountable before the IANA transition occurs.

    The first two are easy. No one has seriously suggested that ICANN move outside the United States, mostly because they know the US government would not approve the IANA transition if they did.

    Chehade repeatedly mentioned the fact that ICANN now has more than 30 compliance staff when only a few years ago it had just six, but he was notably unable to provide much of a defense of ICANN’s compliance department.

  18. Tomi Engdahl says:

    Project Fi Review: Google Masters Wi-Fi Calling, but Needs Better Phones
    Can the search giant change the wireless business like it’s changed high-speed Internet?
    http://www.wsj.com/article_email/project-fi-review-google-masters-wi-fi-calling-but-needs-better-phones-1436285959-lMyQjAxMTE1NTAwNzAwMDcyWj

    For millions of us, Google is the backbone of our digital lives. So it’s a little incongruous that to get to its many services, we generally go through carriers such as Comcast, Verizon or AT&T. In a few towns across America, Google has eliminated the middleman and started providing broadband service. Now it’s taking on wireless with Project Fi.

    An affordable alternative to many mainstream and discount carriers, Project Fi routes calls and data through Wi-Fi whenever possible (hence the name). It roams on T-Mobile and Sprint networks when no Wi-Fi can be found. Dead simple to set up and use, its rates start at $30 a month. It could save you some money if you accept some big limitations.

    It only works with one phone, for starters: the Nexus 6, built by Motorola in collaboration with Google.

    Service is also limited to invites right now.

  19. Tomi Engdahl says:

    Neil King / Arabian Business:
    Sony Mobile CEO Hiroki Totoki on streamlining the unit, IoT strategy, and why the company won’t exit from its mobile business — Billion dollar turnaround: Sony Mobile CEO — The phrase ‘thrown in at the deep end’ might have been invented for Hiroki Totoki.

    Billion dollar turnaround: Sony Mobile CEO
    http://www.arabianbusiness.com/billion-dollar-turnaround-sony-mobile-ceo-598355.html

    Totoki admits, however, that rival companies and brands will always mean that competition is rife.

    “Yes, the competition has become severe,” he says.

    “The smartphone device consists of a battery and a screen and chips. These are the main parts of a smartphone, and people can easily make them now.

    “But it is the user experience that is not the same. Even if the device is the same, the user experience is different.”

    User experience is one of the major talking points for the mobile industry, alongside topics such as wearables and the Internet of Things.

    But another conversation that has intensified in recent months is that surrounding 5G, and heavyweights such as Sony Mobile are expected to be on the front lines of developments.

    So how close is the company to launching a 5G product?

    “The technology roadmap is there,” Totoki states confidently.

    “The use of the technology will really be country by country,” he continues. “For example, in Japan, 2020 is an Olympic year — it will be the second Olympics in our history and a very important year for us. Leading up to 2020, the government and major operators would like to demonstrate 5G technology in Japan. That’s the roadmap we have in Japan, and I’m sure other countries will have theirs as well.”

  20. Tomi Engdahl says:

    What Goes Into a Decision To Take Software From Proprietary To Open Source
    http://news.slashdot.org/story/15/07/06/1950215/what-goes-into-a-decision-to-take-software-from-proprietary-to-open-source

    It’s not often that you get to glimpse behind the curtain and see what led a proprietary software company to open source its software. Last year, the networking software company Midokura made a strategic decision to open source its network virtualization platform MidoNet to address fragmentation in the networking industry.

    Scalable open virtual networking with MidoNet
    http://opensource.com/business/15/1/scalable-open-virtual-networking-midonet

    Networking is an important part of any modern datacenter. As open source continues to grow in virtualization solutions, virtualized networking is an important part of the picture. MidoNet, an open source network virtualization platform for Infrastructure-as-a-Service (IaaS) clouds like the OpenStack cloud software, is gaining traction as a way to implement networking solutions.
    What is MidoNet?

    MidoNet is a production-grade network virtualization solution that allows operators to build isolated networks in software that overlays the existing hardware-based network infrastructure. It addresses the shortcomings in OpenStack Neutron by replacing the default Open vSwitch (OVS) plugin with the MidoNet plugin.

    Modern distributed applications have unique networking and security requirements to ensure application availability and performance.

    It is often a challenge for network administrators to keep up with new infrastructure requests or make changes to support rapid prototyping and continuous delivery.

    Why open source?

    The benefits of open source software development for customers are evident: cross-vendor collaborative engineering leads to technological breakthroughs, code stability through peer reviews, and rapid issue identification and resolution. Midokura made a strategic decision to open source its software to address the fragmentation in the networking industry. The decision to give away four years of engineering to the open source community was deliberate, with far-reaching implications.

    Midokura Enterprise MidoNet is already a proven, scalable virtual networking solution for leading service providers like KVH Asia and Zetta.IO. It was apparent to Midokura that multiple networking vendors were trying to sell proprietary solutions and had little to no incentive to invest in the default configuration in OpenStack.

    MidoNet is modeled after other open source communities like Ubuntu and OpenStack. MidoNet gained initial support from the leading semiconductor vendors active in the Linux open source communities, like Fujitsu and Broadcom, and Ethernet and InfiniBand vendors like Mellanox. Industry analysts have commented that the adoption of OpenStack closely mirrors that of Linux. MidoNet parallels OpenStack adoption, as evidenced by the top three Linux distributions (Red Hat, Canonical/Ubuntu, and SUSE) getting onboard at the onset. Much like some of the other open source projects, it is no surprise to see the first wave of adopters coming from large-scale cloud providers like IDC Frontier (a subsidiary of Yahoo Japan) and HP Helion Eucalyptus, and regional cloud providers like Zetta.io in Norway and KVH Asia.

    Why we changed our software from proprietary to open source
    https://enterprisersproject.com/article/2015/7/why-we-changed-our-software-proprietary-open-source

    Why would a software company choose to change its product from proprietary to open source? It turns out there are many good reasons, says Dan Mihai Dumitriu, CEO and CTO of networking software company Midokura. In this interview with The Enterprisers Project, Dumitriu explains the benefits.

    TEP: How did you come to decide to take your software open source?

    Dumitriu: We saw a movement toward open sourcing infrastructure technology such as Infrastructure as a Service (IaaS) with OpenStack and Eucalyptus. IaaS cloud software is the first category where we see open source leading over proprietary solutions; in the operating system category, by contrast, the adoption of Linux followed Microsoft. To us, this was an indicator that a sea change was happening.

    With the adoption of open IaaS cloud, we also saw Ceph gaining user traction and users showing a strong preference for open source storage solutions over proprietary in block storage. We took a step back to consider our own market and saw a gap for an open source networking solution. So we took the leap by open sourcing the code behind MidoNet so that open source MidoNet can take the lead in the virtual networking category, much as OpenStack and also Ceph are doing.

    TEP: What drawbacks did you have to consider in your decision to go open-source?

    Dumitriu: Being proprietary was a barrier to adoption. Users could not evaluate the software without a legal agreement and speaking with a sales rep. With cutting edge technology, users have an expectation to try the software and to understand how things work under it. They are also generally reluctant about bringing new technologies with unknown security risks into their environment. While all software has known vulnerabilities, with open source software, users can review the source code, run security audits, assess the exposure for themselves, and then decide.

    TEP: Once you’d decided to go open source, how was that communicated to your team?

    Dumitriu: The decision to go open source was a culmination of several years of deliberation and deep involvement with our engineers. Engineers at all levels took part in our business, marketing and technical discussions. We ensured that the final decision was not a surprise to anyone as the entire company was involved in all aspects.

    TEP: How did that decision change how your engineers do their work?

    Dumitriu: From a technical standpoint, engineers work pretty much the same as before, given that we modeled our software development processes similarly to OpenStack. Prior to our decision to open source, we had already adopted tools commonly favored by the open source community because they were helpful for managing our globally distributed team.

    TEP: What advice would you pass along to other CTOs about going open source?

    Dumitriu: CTO to CTO: Get ready for things to slow down when you engage the community, but if you’re willing to forgo the immediate returns and see the big picture, with an active community, you will be far more successful in five years than if you go it alone.

  21. Tomi Engdahl says:

    IBM demos first fully integrated monolithic silicon photonics chip
    Electro-optical chips could bring big bandwidth gains and lower power consumption.
    http://arstechnica.co.uk/information-technology/2015/05/ibm-demos-first-fully-integrated-monolithic-silicon-photonics-chip/

    At a conference in the US, IBM has demonstrated what it claims to be the first fully integrated wavelength multiplexed silicon photonics chip. This is a big step towards commercial computer chips that support both electrical and optical circuits on the same chip package, and ultimately the same die. Optical interconnects and networks can offer much higher bandwidth than their copper counterparts, while consuming less energy—two factors that are rather beneficial as the Internet grows and centralised computing resources continue to swell.

    The first step is to bring optical channels onto the motherboard, then onto the chip package, and ultimately onto the die so that electrical and optical pathways run side-by-side at a nanometer scale.

    IBM’s latest nanophotonic chip belongs to the second category: it can be placed on the same package as an electronic chip, bringing the electro-optical conversion a lot closer to the logic. It’s important to note that the lasers themselves are still being produced off-chip, and brought into the nanophotonic chip through the “laser input ports” that you can see in the diagram above. Once the chip has been fed some lasers, there are four receive and transmit ports, each capable of transporting data at 25 gigabits per second, which are bundled up into 100Gbps channels via wavelength multiplexing.

    That’s just this chip, though; IBM says that, in theory, its technology could allow for chips with up to eight channels. 800Gbps from a single optical transceiver would be pretty impressive.
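
    The arithmetic behind those headline numbers is easy to check against the figures quoted above:

        ports = 4            # transmit/receive ports on the demonstrated chip
        gbps_per_port = 25   # each port carries 25 Gbit/s
        channel_gbps = ports * gbps_per_port
        print(channel_gbps)                 # 100 Gbit/s per wavelength-multiplexed channel

        max_channels = 8                    # the scaling limit IBM suggests
        print(max_channels * channel_gbps)  # 800 Gbit/s from a single transceiver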

  22. Tomi Engdahl says:

    In Search of 3D Printed Fiber Optics
    http://www.eetimes.com/document.asp?doc_id=1327107&

    Fiber-optic cables are constructed by pulling heated glass into fine strands and surrounding them with other materials. Today’s manufacturing techniques can provide consistent material composition in a fiber. But, perhaps having the ability to create new compositions in fibers can lead to faster communications and new applications.

    A research team at the University of Southampton is investigating a new way to construct optical fibers using 3D printing. This technique could lead to optical fibers that use a variety of glass materials, some of which could lead to better reflectivity within a fiber and thus result in lower losses.

    “We will design, fabricate and employ novel Multiple Materials Additive Manufacturing (MMAM) equipment to enable us to make optical fibre preforms (both in conventional and microstructured fibre geometries) in silica and other host glass materials,” said Professor Jayanta Sahu of the University of Southampton in a press release. “Our proposed process can be utilised to produce complex preforms, which are otherwise too difficult, too time-consuming or currently impossible to be achieved by existing fabrication techniques.” (The original article includes a photo of a silica fiber drawing tower at the University of Southampton’s Zepler Institute.)

    3D printing will let the researchers build an optical fiber in layers using ultra-pure glass powder. The research team admits that the high melting temperature of silica (2000°C) will bring challenges to the construction.

    Assuming that the team is successful at 3D-printing optical fibers, much testing will be needed. For these new fibers to be practical, they will need measurements for optical power loss, dispersion, physical strength, mounting to connectors, and a host of others.

  23. Tomi Engdahl says:

    Bell Labs Plugs in to the Network
    Storied lab is back on track, president says
    http://www.eetimes.com/document.asp?doc_id=1327100&

    Following a lull in the 1980s and early 1990s, and a shift toward business-driven research, “Bell Labs feels like it’s back on track,” President Marcus Weldon told EE Times from his Murray Hill, New Jersey office. The multi-Nobel Prize winning research institute has kicked networking research into high gear under Alcatel-Lucent and is about to get a boost from a mega-merger with Nokia Networks.
    Bell Labs’ physicists, chemists and engineers should “think freely about problems that people think are too complex to solve,” Weldon said. Many of those problems will arise over the next 10 years in the areas of wireless and networking infrastructure thanks to a lack of spectrum and an ever-increasing number of connections. As a result, the network will be the next big thing in innovation, Weldon said.

    Last year, the labs announced its researchers sent data at 10 Gbits/second over existing copper wires. The XG-Fast project was the first major success in a 13-project series that aims to solve networking problems ten years out.

    “We’ve hit the physical limits of almost all the technology we have in the way its deployed today,” he said. “We’re going to move away from an [architecture] model where things are relatively centralized and we’re going to have to distribute them. As we go there, we’re going to bring the cloud with us,” he added.

    “One area of commonality is in wireless research. In the era where Huawei claims it’s investing thousands of people into 5G, it’s not even clear that sum of [Nokia and Bell Labs’] research groups would…keep up with the Chinese,” he said.

    The Chinese government allows foreign vendors to claim a maximum 11% of the wireless market; Alcatel-Lucent and Nokia each have an 11% share and it’s unknown whether the new company could have a 22% share. Weldon expects that, if the new telecoms giant couldn’t have a larger share, Samsung and Ericsson may pick up the slack.

  24. Tomi Engdahl says:

    Erica Swanson / Google Fiber Blog:
    Google to offer free internet access in select public housing areas in all Google Fiber markets, as part of ConnectHome, Obama’s low-cost Internet program

    Bringing Internet access to public housing residents
    http://googlefiberblog.blogspot.fi/2015/07/connecthome-google-fiber.html

    As many as 26% of households earning less than $30,000 per year don’t access the Internet, compared to just 3% of adults with annual incomes over $75,000. Google Fiber is working to change that. Today, in all of our Google Fiber markets, we’re launching a program to connect residents in select public and affordable housing properties for $0/month with no installation fee.

    This initiative is part of ConnectHome, a bold new program launched by the White House and U.S. Department of Housing and Urban Development (HUD) that aims to bring Internet connectivity to more school-aged children and families living in HUD-assisted housing in 28 communities across the country.

    We realize, though, that providing an Internet connection is just one piece of the puzzle. People can only take advantage of the many benefits of the web when they understand why it matters and know how to use it. That’s why we’ll also partner with ConnectHome and local community groups to develop basic computer skills training and create computer labs to host these trainings in each of our Fiber markets.

  25. Tomi Engdahl says:

    Don Reisinger / CNET:
    Obama unveils ConnectHome, a pilot program launching in 27 cities, to get low-income households online; some communities will get broadband access for no charge

    Obama unveils ConnectHome to get low-income households online
    http://www.cnet.com/news/obama-unveils-connecthome-to-get-low-income-households-online/

    The pilot program will launch in 27 cities and one tribal nation and reach more than 275,000 low-income households. Some communities will receive broadband connections at no charge.

    The pilot program is part of the Obama administration’s continuing effort to close the digital divide, ensuring that everyone, regardless of income, has access to high-speed Internet service.

    According to the Pew Research Center, 92 percent of households with incomes between $100,000 and $150,000 have broadband access, but less than half of households below the $25,000 income level can tap into high-speed Internet. The American Library Association, which applauded the president’s move on Wednesday, said 5 million households with school-age children do not have high-speed Internet service.

    “While nearly two-thirds of households in the lowest-income quintile own a computer, less than half have a home internet subscription,” the White House said in a statement on Wednesday. “While many middle-class U.S. students go home to Internet access, allowing them to do research, write papers, and communicate digitally with their teachers and other students, too many lower-income children go unplugged every afternoon when school ends. This ‘homework gap’ runs the risk of widening the achievement gap, denying hardworking students the benefit of a technology-enriched education.”

    The US government has been actively seeking ways to connect more low-income households to the Web. In June, the Federal Communications Commission voted to advance a proposal that would allow qualifying households to use their $9.25-per-month Lifeline subsidy on either phone or broadband service.

    “Broadband has gone from being a luxury to a necessity,” FCC Chairman Tom Wheeler said at the time.

  26. Tomi Engdahl says:

    Why Is There Liquid Nitrogen On the Street Corner?
    http://hackaday.com/2015/07/21/why-is-there-liquid-nitrogen-on-the-street-corner/

    Any NYC hackers may have noticed something a bit odd this summer while taking a walk… Giant tanks of liquid nitrogen have been popping up around the city.

    There are hoses that go from the tanks to manholes. They’re releasing the liquid nitrogen somewhere…

    Luckily, we now have an answer. Popular Science writer [Rebecca Harrington] got to investigate it as part of her job. As it turns out, the liquid nitrogen is being used to pressurize the cables carrying our precious phone and internet service in NYC. The cables have a protective sheath covering them, but during construction and repairs, the steam buildup in some of the sewers can be too much for them, so liquid nitrogen expanding into gas is used to supplement the pressurized cables and keep them dry. As the liquid nitrogen boils away, it expands 175 times, which helps keep moisture out of the cables.

    Sounds expensive, but apparently liquid nitrogen was the cheapest option

    Why Are There Liquid Nitrogen Canisters On NYC Sidewalks?
    http://www.popsci.com/those-nitrogen-canisters-nyc-streets-are-keeping-your-internet-cables-cool

  27. Tomi Engdahl says:

    Ted Johnson / Variety:
    FCC Chairman and Justice Department say they will approve AT&T Direct TV merger, with conditions on data caps, interconnection agreements, and fiber buildout

    FCC Nears Approval of AT&T-DirecTV Merger
    http://variety.com/2015/biz/news/att-directv-fcc-merger-1201545108/

    FCC Chairman Tom Wheeler said that he is recommending approval of the AT&T merger with DirecTV with conditions, including provisions to prevent the combined company from discriminating against online video competition.

    The Department of Justice also has reviewed the deal and will not challenge it.

    “After an extensive investigation, we concluded that the combination of AT&T’s land-based internet and video business with DirecTV’s satellite-based video business does not pose a significant risk to competition,”

    Reply
  28. Tomi Engdahl says:

    Ina Fried / Re/code:
    AT&T adds 2.1M customers, mostly from connected cars and tablets in Q2, meets expectations by reporting revenue of $33B

    AT&T Adds 2.1 Million Customers, Though Mostly Cars and Tablets
    http://recode.net/2015/07/23/att-adds-2-1-million-customers-amid-intense-carrier-battle/

    AT&T on Thursday said it added 2.1 million customers over the past quarter, though more than half of the new additions were from connected cars, and another 600,000 came from tablets and other non-phone devices.

    AT&T said it has largely made the shift to unsubsidized smartphones, with more than three-quarters of customers on one of its shared data plans and two-thirds of customers adding a smartphone now doing so through the AT&T Next device financing. In all, the company said 96 percent of customers are on either a family plan, a business plan or a shared data plan.

    However, AT&T’s customer gains came outside the most lucrative area of core postpaid phone users, with the company gaining ground among prepaid phone customers and those connecting a car or tablet to the network.

    Reply
  29. Tomi Engdahl says:

    Brian Fung / Washington Post:
    Comcast now has more broadband customers than it does TV subscribers, 22.5M vs 22.3M
    http://www.washingtonpost.com/blogs/the-switch/wp/2015/07/23/comcast-has-finally-become-an-internet-first-company/

    Reply
  30. Tomi Engdahl says:

    AT&T Unveils Nationwide Combo TV And Wireless Package With DirecTV
    http://deadline.com/2015/08/att-directv-nationwide-wireless-phone-tv-package-1201489859/

    AT&T watchers have been waiting to see what it will do with DirecTV now that the companies are united, turning the telco into the world’s largest pay TV provider. That makes the announcement today of what it calls the first nationwide combo wireless phone and TV package interesting, even if it was predictable.

    AT&T hopes to lure customers from cable and Dish Network by offering promotional bargains beginning August 10 to people who are new to DirecTV, its wired U-verse service (in 21 states), or its wireless phone platform — and subscribe to both TV and phone. Satellite and phone companies have jointly sold their services, but AT&T is the first to do so across the country with phone and TV under the same corporate roof.

    This is “the first of many planned moves to enable our customers to enjoy a premium entertainment experience almost anywhere,” AT&T Entertainment and Internet Services Chief Marketing Officer Brad Bentley says.

    AT&T says the new plans will simplify TV and wireless phone purchasing, and offer a bargain. For example, new DirecTV and U-verse customers can pay $200 a month for basic service (DirecTV Select or U-verse U-Family) delivered to four DVR-equipped receivers, plus four wireless lines with unlimited talk and text and 10GB of shareable wireless data.

    But even a simplified offering can become complicated when you look at the different levels of available TV and wireless broadband services — and the fine print.

    Reply
  31. Tomi Engdahl says:

    James Cook / Business Insider:
    Sources: Apple is conducting private MVNO trials in the US and is in talks with European operators, but the service might be five years away

    Apple is in talks to launch its own virtual network service in the US and Europe
    Read more: http://uk.businessinsider.com/apple-in-talks-to-launch-an-mvno-in-the-us-and-europe-2015-8?op=1?r=US&IR=T#ixzz3hmnSEvet

    Here’s how an Apple MVNO will work: Instead of paying your carrier every month, you will pay Apple directly for data, calls, and texts. Apple then provides you with everything you used to get from your carrier, but the Apple SIM switches between carriers to get the best service. The telecoms companies auction capacity to Apple so it can run the service.

    There is no guarantee Apple’s service will launch beyond a test phase, and if it does, it will not roll out anytime soon. Telecoms sources say Apple is looking long-term with its MVNO and could take at least five years to fully launch the service.

    Apple is preparing to launch a voicemail service that will use Siri to transcribe your messages
    Read more: http://uk.businessinsider.com/apple-siri-voicemail-transcription-service-2015-8?op=1?r=US&IR=T#ixzz3hmnMQzHc

    Reply
  32. Tomi Engdahl says:

    Roger Cheng / CNET:
    Verizon eliminates subsidized phones and contracts, offers 4 plans with 1-12GB of data for $30-80 per month and a $20 per month smartphone access fee

    Verizon kills off service contracts, smartphone subsidies
    http://www.cnet.com/news/verizon-kills-off-service-contracts-smartphone-subsidies/

    In a radical shift, the company will offer only new plans that require customers to pay for their own smartphones. Device access fees and buckets of data remain.

    Verizon is shaking up how you pay for your wireless service.

    Verizon Wireless on Friday introduced a set of new data plans that require customers to pay for their smartphone in monthly installments or buy it outright. The new plans go into effect August 13.

    It’s a radical change in how Verizon operates and signals a broader shift away from smartphone subsidies and service contracts. Customers are increasingly paying for their devices in exchange for lower service fees — a trend started by T-Mobile two years ago. The change has resulted in heightened awareness of their smartphone and service costs.

    Reply
  33. Tomi Engdahl says:

    Library books counterfeit cabling
    http://www.cablinginstall.com/articles/slideshow/photo-of-the-day/new-image-gallery/may-2015-homepage-photos.html?cmpid=EnlCIMAugust32015&eid=289644432&bid=1139962

    “So out of curiosity, I pulled out just a bit of wire. Didn’t feel right. Stripped it down to copper conductor. Scratched it with scissors. Aluminum.

    Had to give him the bad news.

    I feel that every once in a while the IT profession needs to add something to the “Certification” they take and promote on the wall like, ‘I don’t know anything about structured cabling.’

    Reply
  34. Tomi Engdahl says:

    Twisted-pair cable termination is an evolving practice
    http://www.cablinginstall.com/articles/print/volume-23/issue-6/features/installation/twisted-pair-cable-termination-is-an-evolving-practice.html?cmpid=EnlCIMAugust32015&eid=289644432&bid=1139962

    Tools have made the termination process easier on many installations, while latest-generation cabling presents challenges anew.

    Terminating twisted-pair copper cable is a fundamental practice (perhaps the fundamental practice) among cabling technicians everywhere. In some ways, twisted-pair termination is the skill by which cabling installers may be measured; that is the case in more than one industry competition. The BICSI Cabling Skills Challenge (www.bicsi.org/skillschallenge/) includes an event titled “Copper Cable Terminations, Firestopping, Grounding and Bonding.”

    The punchdown tool can be found in every cabling technician’s belt, and is available from a number of suppliers.

    Jack termination

    For more than a decade manufacturers have offered punchdown tools that let installers simultaneously terminate all eight wires of a twisted-pair cable to a jack. Today in 2015, several different versions of these multi-conductor termination tools are available in slightly different forms.

    The paper concludes with an acknowledgement wrapped in an opportunity: “In the real world, you rarely get a chance to work under best-case conditions where you can consistently achieve world-record termination times. You’re squeezed into tight closets, hunched under desks, or worse. Yet the point still remains that the faster you can terminate outlets, the more profitable you can be on the project. And relatively speaking, a termination method that is twice as fast on a nice, clean workbench will be twice as fast on the jobsite.”

    Single-termination and tool-less modular jacks may very well be overwhelmingly popular as structured cabling systems forge into the future. But the ubiquity of connectivity hardware like 66 blocks in networks even today means that the trusted punchdown tool is destined to be found in cabling technicians’ tool belts for a long time to come.

    Reply
  35. Tomi Engdahl says:

    Corroded abandoned cable causes underground explosion
    http://www.cablinginstall.com/articles/2015/07/ri-beach-explosion-corroded-abandoned-cable.html

    Researchers have determined that the cause of a mysterious underground explosion at a Rhode Island beach was hydrogen combustion caused by corrosion of an abandoned cable. The copper cable had been used by the United States Coast Guard and was abandoned beneath Salty Brine Beach in Rhode Island. On July 11 beachgoers were startled and one suffered broken ribs when the explosion occurred beneath the sandy surface.

    On Friday, July 24, CBS News reported that the explosion “was very likely caused by the combustion of hydrogen gas built up because of a corroded copper cable under the sand …

    Abandoned cable removal a dogged challenge for all
    July 1, 2007
    http://www.cablinginstall.com/articles/print/volume-15/issue-7/features/installation/abandoned-cable-removal-a-dogged-challenge-for-all.html

    Unfortunately, for everyone involved, ignoring the issue won’t make it go away.

    For just about a half-decade, the National Electrical Code has included language requiring the removal of cable from building pathways when that cable is not in current use or tagged for future use. The NEC defines this type of cable as “abandoned,” and mandates its removal, though not its method of removal.

    Over the past five years, this and other cabling-trade publications have chronicled the development and modification of abandoned-cable removal requirements. More recently, trade publications focused on the profession of real-estate management have turned their attention to the topic as well, and with good reason. The glut of abandoned cables inside commercial office buildings today exists, at least in part, because of the transient nature of occupancy in such buildings.

    Traditionally, when a tenant moved out of a building, it would leave the cabling in place, sometimes several generations of it (e.g., Category 3, 5, and 5e for some long-term tenants). And most often, a new tenant would install a new structured cabling system rather than rely on the cabling left by the previous occupant.

    “BOMA International recommends that building owners and managers survey their buildings to identify unused cable. If such wires exist, members should identify the wiring by its rating (riser rated “CMR,” plenum rated “CMP”) and its use (communications, alarm, security, etc.). The NEC 2002 and 2005 include language that allows some cabling to be retained if it is tagged for future use as long as it meets the permitted use criteria specified for cable installations (i.e., minimum of “CMR” and/or “CMP”). Any cable that does not meet the permitted use specifications should be removed.

    The BOMA paper continues: “Your leases should clearly state that tenants must remove any cabling that is abandoned during the term of their tenancy, and/or your license agreements should require service providers to remove all wires upon the termination of the contract. We recommend that you review your leases and license agreements to ascertain exactly who was responsible for the installation and/or abandoning of the cabling and whether you have recourse to recover any of the funds needed to remove the wire. Next, make any amendments necessary if you are not already protected by these agreements.”

    Reply
  36. Tomi Engdahl says:

    5G base station architecture: The potential semiconductor solutions
    http://www.edn.com/design/analog/4439931/5G-base-station-architecture–The-potential-semiconductor-solutions?_mc=NL_EDN_EDT_EDN_weekly_20150723&cid=NL_EDN_EDT_EDN_weekly_20150723&elq=4ee4d66839ba4ff2b0700e8de3c6af91&elqCampaignId=24073&elqaid=27191&elqat=1&elqTrackId=0eedc3b35f74448b9f82742dca7ed8ae

    For many, 5G is too far away to think about right now; to others 5G is too complex or too aggressive in its goals. Be sure, my friends, that 5G will be upon us like a pouncing tiger, sooner than you think.

    I have spoken to many of the key semiconductor companies that I felt had the best potential to create 5G solutions based upon their present architectures and future evolution and advancements in their technology, processes and architectures. I present the following as a discussion as well as an informative view of what may be to come on the road to 5G development. Of course, these companies want to keep their roadmaps close to the vest, but there are some good insights in what they expressed to me below.

    Efficient Power Conversion

    I discussed 5G with Alex Lidow, CEO and co-founder of Efficient Power Conversion, who said:

    As the consumer demands more data wirelessly, the industry needs to move from a 4G to a 5G transmission technology. Unfortunately, as we go to higher data transmission rates there is an exponential and unacceptable decrease in the efficiency of the transmitter. This decrease can be fixed using a technology called envelope tracking, which has already been adopted in newer 4G/LTE base stations as well as cellular phones. Envelope tracking in base stations requires the high speed, high power, and high voltages that are only available using GaN technology. Today this is one of the largest markets for GaN transistors, and will hold that position for the next several years.

    I fully expect that eGaN technology will be one of the most important solutions to power efficiency in base station infrastructure for 5G; the peak-to-average ratios will be worse in 5G. Envelope tracking is obvious right now as one way eGaN power transistors will do this, but over the next 3 to 5 years more applications will emerge as eGaN technology progresses.

    The power transistor revolution needed to enable 5G networks comes from gallium nitride.
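
    For readers unfamiliar with envelope tracking: instead of a fixed supply rail sized for the signal’s rare peaks, the supply voltage follows the instantaneous signal envelope, so the amplifier burns far less headroom. A toy Python model (my own illustration with made-up numbers, not EPC’s data) shows the effect on a high peak-to-average signal:

        # Toy model: why envelope tracking saves power on high peak-to-average signals.
        import numpy as np

        rng = np.random.default_rng(0)
        # Crude stand-in for an OFDM-like signal: complex Gaussian, so high PAPR.
        envelope = np.abs(rng.normal(size=100000) + 1j * rng.normal(size=100000))

        v_fixed = envelope.max()             # a fixed rail must clear the biggest peak
        headroom = 0.1                       # assumed tracker margin above the envelope
        v_tracked = envelope + headroom * v_fixed

        # In a simple linear-amplifier model, dissipation scales with (Vsupply - Venv).
        waste_fixed = np.mean(v_fixed - envelope)
        waste_tracked = np.mean(v_tracked - envelope)
        print("wasted headroom, fixed rail: %.2f (arbitrary units)" % waste_fixed)
        print("wasted headroom, tracked:    %.2f" % waste_tracked)
        print("reduction: %.0f%%" % (100 * (1 - waste_tracked / waste_fixed)))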

    Where we are now and why?

    Data Centers: Constantly looking to fit more power in the same rack with the same cooling -> higher density, higher efficiency

    Data Centers and Base Stations: Total cost of ownership -> reduced electricity costs -> higher efficiency at all loads

    Base Stations: Cramming more capacity/more channels into a fixed power infrastructure -> higher efficiency, higher density, and envelope tracking

    5G goals:

    1,000 X in mobile data volume per geographical area, reaching a target ≥ 10 Tb/s/km²
    1,000 X in number of connected devices, reaching a density ≥ 1M terminals/km²
    100 X in user data rate, reaching a peak terminal data rate ≥ 10 Gb/s
    1/10 X in energy consumption compared to 2010
    1/5 X in end-to-end latency, reaching 5 ms for e.g. the tactile Internet, and radio link latency reaching a target ≤ 1 ms for e.g. vehicle-to-vehicle communication
    1/5 X in network management OPEX
    1/1,000 X in service deployment time, reaching a complete deployment in ≤ 90 minutes

    It is hard (I would say impossible) to imagine that these goals can be met without an absolutely revolutionary change to power management.

    Reply
  37. Tomi Engdahl says:

    New house brings Internet questions & wireless links
    http://www.edn.com/electronics-blogs/benchtalk/4439898/New-house-brings-Internet-questions—wireless-links?_mc=NL_EDN_EDT_EDN_today_20150714&cid=NL_EDN_EDT_EDN_today_20150714&elq=cc618ac7fadc4995b4e3db1c782c625f&elqCampaignId=23908&elqaid=27006&elqat=1&elqTrackId=e5b6313e5d1248539077fe10625707e9

    One of the most important criteria in buying our new (old) rural property was of course good Internet service. DSL is apparently available (to the extent that one can trust the phone company), and even fibre might be coming soon, but the previous owner had something else: a wireless system. Would this be enough?

    Turns out this isn’t a bad system. It won’t break any speed records, but typical Speedtest numbers are 40ms ping, 2Mb/s down, and 0.7Mb/s up. I’d like more, but the price is reasonable, it’s been reliable so far (three months), and I don’t have to deal with the phone company!

    Once this workable system was verified, my thoughts turned to how we’d link to a second building on the property, about 200m away.

    I did notice that a company called EnGenius kept popping up with promising-looking equipment, so I admitted defeat and called their tech support line. Amazingly, I talked with a knowledgeable person, who confirmed that I could use a pair of their ENH202s to create a long-distance link.

    But the 200m wireless link is only half the story. How do I connect them to form a complete system? That’s when my real troubles began.

    The EnGenius boxes support multiple modes (Access Point, Client Bridge, WDS, and Client Router) and sub-modes, so even figuring out which to use was making my head hurt.

    Many sections simply reiterate the obvious

    Long story short, I finally found myself on my neighbour’s Internet connection! We each had an ENH202 on our LAN, and had assigned them static IP addresses. Great! System proven.

    Now, I was using the AirPort as my router, with the D-Link acting as the wireless access point in the remote location. Something was definitely not working as planned.

    Reply
  38. Tomi Engdahl says:

    Simplify communication system design while increasing available bandwidth
    http://www.edn.com/design/analog/4439915/Simplify-communication-system-design-while-increasing-available-bandwidth?_mc=NL_EDN_EDT_EDN_analog_20150716&cid=NL_EDN_EDT_EDN_analog_20150716&elq=e8b5e199f6a54057a45e030fec851763&elqCampaignId=23927&elqaid=27030&elqat=1&elqTrackId=ea27d2f274cf4b79b7905311a738c251

    In modern communications systems, the more bandwidth that is available, the more information that can be transmitted. As the requirements for bandwidth increase, the need for faster and higher linearity ADCs (analog-to-digital converters) and amplifiers also increases. With increased bandwidth, more noise is introduced into the system which can overpower low level signals of interest. This places added requirements on the ADC and amplifier to have low noise. Also with increased bandwidth, the linearity of the system becomes more critical, especially in the presence of a strong interferer which can block other signals of interest. One approach to resolve these issues is to use a high speed, high resolution ADC with an equally fast amplifier driving it. This will allow better sensitivity and selectivity of the receiver, and will ultimately improve the quality of the system.

    There are several fundamental design trade-offs in any communications system. Bandwidth, spurious free dynamic range (SFDR) and sensitivity are all important factors in a communications system, but are difficult to achieve with a single solution. The usable bandwidth of a system is heavily dependent on the sample rate of the ADC.

    A major challenge in high speed communications design is maintaining good SNR while maintaining wide bandwidth. Designers face trade-offs among the various approaches. One approach is to use slower sample rates and higher order filters to attenuate the out-of-band noise before the analog input signal is sampled. This requires a complex filter network with several stages of attenuation, requiring a significant number of components. High order filters also tend to ring in response to the sampling glitches of the high speed ADC. This problem is solved by using an ADC with a faster sample rate. This simplifies the analog filter design.

    A wider bandwidth allows more noise to be sampled by the ADC, increasing the need for an ADC with a high SNR.
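
    To make that bandwidth/noise trade-off concrete, here is a back-of-the-envelope sketch (a simplified model using the textbook kTB thermal noise floor, not figures from the article): every doubling of sampled bandwidth raises the integrated noise floor by 3 dB, which the ADC and amplifier must win back in SNR.

        # Sketch: how sampled bandwidth raises the integrated thermal noise floor.
        # Assumes the textbook kTB model: -174 dBm/Hz at room temperature.
        import math

        def noise_floor_dbm(bandwidth_hz, nf_db=0.0):
            """Integrated thermal noise in dBm for a given bandwidth and noise figure."""
            return -174.0 + 10.0 * math.log10(bandwidth_hz) + nf_db

        for bw in (10e6, 20e6, 40e6, 80e6):
            print("BW %3.0f MHz -> noise floor %6.1f dBm" % (bw / 1e6, noise_floor_dbm(bw)))

        # Nyquist: an ADC sampling at fs can at best digitize fs/2 of bandwidth,
        # which is why usable bandwidth is tied to the converter's sample rate.
        fs = 250e6
        print("fs = %.0f MHz -> first-Nyquist bandwidth %.0f MHz" % (fs / 1e6, fs / 2 / 1e6))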

    Reply
  39. Tomi Engdahl says:

    Fast active optics minimize power consumption
    http://www.edn.com/electronics-products/other/4439921/Fast-active-optics-minimize-power-consumption?_mc=NL_EDN_EDT_EDN_today_20150715&cid=NL_EDN_EDT_EDN_today_20150715&elq=5b7289a9d0764c72ac2f8a4a888bf55d&elqCampaignId=23931&elqaid=27034&elqat=1&elqTrackId=4b57042e3dc843d69ba2d4849d8425b6

    Coolbit optical engines from TE Connectivity convert data from electrical signals to optical signals and are the driving force behind the company’s 100G, 300G, and 400G AOC (active optical cable) assemblies.

    Each Coolbit optical engine uses 25G VCSEL (vertical cavity surface emitting laser) and PIN (p-doped intrinsic n-doped) devices, a transimpedance amplifier, and a driver IC. Coolbit-based products include 100G QSFP28 AOCs, 100G QSFP28 transceivers, 300G mid-board optical modules, and 400G CDFP AOCs.

    According to the manufacturer, QSFP28 modules perform at less than 1.5 W/transceiver and help communication systems achieve up to 60% more power savings than existing solutions.
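
    To put that 1.5 W figure in context (my arithmetic, not TE Connectivity’s): at 100 Gb/s it corresponds to roughly 15 pJ per bit, the figure of merit optics designers usually quote.

        # Sketch: energy per bit implied by the QSFP28 figure quoted above.
        power_w = 1.5          # per transceiver, per the manufacturer's claim
        rate_bps = 100e9       # 100G QSFP28

        print("%.1f pJ/bit" % (power_w / rate_bps * 1e12))   # 15.0 pJ/bit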

    Reply
  40. Tomi Engdahl says:

    NFV vendors of the world unite! Or maybe work together?
    Interoperability is top customer concern, says Light Reading
    http://www.theregister.co.uk/2015/08/11/nfv_interoperability_validation_testing/

    Light Reading, the insider’s publication for the comms industry, is calling on vendors to join the world’s first interoperability tests for Network Function Virtualization (NFV).

    The online magazine has teamed up with the European Advanced Networking Test Center, based in Berlin, as its test partner to evaluate “which products can be installed together to support next-generation virtualized New IP networks,” Light Reading’s CEO and co-founder Steve Saunders writes in a blog post.

    According to Saunders, NFV is the fundamental enabler of next generation communications networks – and incumbent vendors are all onboard with the concept.

    But interoperability is a huge concern for big telcos and other customers, and they need to know that products from different vendors will work together. Otherwise they are vulnerable to the bad old days of proprietary lock-in.

    “Without interoperability, service providers won’t have the freedom to choose the best virtual network function (VNF) from one company and deploy them with core NFV infrastructure (NFVi) from another,” he writes.

    Light Reading is optimistic that vendors will accept its formal invitation to join the independent validation programme.

    Reply
  41. Tomi Engdahl says:

    Bunitu botnet crooks sell your unencrypted VPN traffic for £££
    Unknowing proxies help zombie army lurch forward
    http://www.theregister.co.uk/2015/08/11/bunitu_botnet_vpn_scam/

    Cyber-crooks behind the Bunitu botnet are selling access to infected proxy bots as a way to cash in from their network.

    Users (some of whom may themselves be shady types, as explained below) who use certain VPN service providers to protect their privacy are blissfully unaware that back-end systems channel traffic through a criminal infrastructure of infected computers worldwide.

    Not only that, but all traffic is also unencrypted – defeating the main point of using a VPN service.

    The lack of encryption gives consumers a false sense of security while simultaneously leaving their traffic open to interception, or worse yet, man-in-the-middle and traffic redirection attacks.

    The cheap and nasty VPN scam was uncovered by security researchers from anti-virus firm Malwarebytes and ad-fraud-fighting outfit Sentrant. The two firms originally began investigating the botnet in the belief that ad-click fraud was its main source of illicit income, before realising that dodgy VPN services seemed to be the main fraud in play.

    In particular, a VPN service called VIP72 was heavily involved with the Bunitu botnet and its proxies. VIP72 appears to be a top choice for cyber-criminals, as referenced on many underground forums, and a particular favorite with Nigerian 419 scammers, among others.

    Reply
  42. Tomi Engdahl says:

    Indian carriers forced to send TXT for every 10 megabyte download
    Not everyone in the next billion can afford lots of data
    http://www.theregister.co.uk/2015/08/11/indian_carriers_forced_to_send_txt_for_every_10_megabyte_download/

    India has decided its mobile carriers must inform subscribers every time they download ten megabytes of data.

    New rules (PDF) posted last week also contain a new provision that will force carriers to switch off mobile data access on receipt of a single text message.

    Not all of India’s mobile carriers have a national footprint, so roaming charges can be incurred within the nation’s borders. Opting out of data is therefore a handy tool. Some carriers have also indulged in sharp-ish practices, either making it very hard to opt out of data or not being particularly forthcoming when subscribers approach or exceed their download allowances.

    All carriers must now send their customers notice every time they get through ten megabytes of data. For those on capped download plans, there’s a TXT or Unstructured Supplementary Service Data (USSD) message coming when they hit fifty, ninety, and one hundred per cent of their download allowances.

    There’s also a new provision that “No service provider shall activate or deactivate the data service on the Cellular Mobile Telephone connection of a consumer without his explicit consent.” TXTing 1925 will be the method of opting in or out.
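
    Read as an implementation spec, the rules boil down to two triggers. Here is a minimal sketch of the notification logic they imply (my own illustrative code, not from the regulator’s document): an alert on every 10 MB consumed, plus threshold alerts at 50, 90 and 100 per cent of a capped allowance.

        # Sketch: usage-notification logic implied by the Indian rules above.
        ALERT_EVERY_BYTES = 10 * 1024 * 1024          # notify every 10 MB used
        THRESHOLDS = (0.5, 0.9, 1.0)                  # 50%, 90%, 100% of the cap

        def alerts(used_before, used_after, cap_bytes=None):
            """Alerts triggered when usage moves from used_before to used_after bytes."""
            msgs = []
            if used_after // ALERT_EVERY_BYTES > used_before // ALERT_EVERY_BYTES:
                msgs.append("You have used %d MB of data." % (used_after // (1024 * 1024)))
            if cap_bytes:  # threshold alerts apply only to capped plans
                for t in THRESHOLDS:
                    if used_before < cap_bytes * t <= used_after:
                        msgs.append("You have reached %d%% of your allowance." % (t * 100))
            return msgs

        # Crossing 50 MB on a 100 MB plan fires both the 10 MB tick and the 50% alert.
        print(alerts(49 * 1024**2, 52 * 1024**2, cap_bytes=100 * 1024**2))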

    Reply
  43. Tomi Engdahl says:

    NIWeek: Inside the Nokia/NI 5G system
    http://www.edn.com/electronics-blogs/test-cafe/4440101/NIWeek–Inside-the-Nokia-NI-5G-system?_mc=NL_EDN_EDT_EDN_today_20150810&cid=NL_EDN_EDT_EDN_today_20150810&elq=2a67c0ad14424b41a2492856bcec58d8&elqCampaignId=24283&elqaid=27424&elqat=1&elqTrackId=a2a799b112fc44ed86a4a21dd93e5ac8

    It’s August. It’s Texas. It’s unbelievably hot. It’s full of modular instruments and LabView. Yes, it’s my annual trek (along with 3200+ others) to NIWeek in Austin, Texas, a conference hosted by National Instruments.

    This year I had a special interest in 5G communications. Frequent readers of Test Cafe blog know that I have recently written about 5G, and why modular instruments are well positioned architecturally to address the challenges of 5G. Architecturally yes, but without the mmWave (millimeter wave) instruments needed for the highest frequency 5G microwave bands. My most recent column made an unequivocal prediction: We will see modular microwave, and mmWave in particular, within 18 months. That is, by the end of 2016.

    As part of the Day 2 Keynote presentation, Nokia and NI demonstrated a “5G” 10Gb/s wireless link operating at 73GHz, architecturally based on PXI and LabView. For completeness, I should mention that Keysight Technologies had just announced a 5G channel sounding reference system, based on PXI and AXIe, a week earlier.

    You won’t find any press releases or data sheets about the NI products behind the Nokia system. They are yet to be released as generally available products. However, NI was more than happy to showcase the system and describe how it operates.

    The key thing to recognize about this system is that it is an actual prototyping system. That is, it is not merely instrumentation to verify other 5G hardware; it is the actual prototype hardware used by Nokia.

    The system is a 2×2 MIMO system. That is, it has two transmit antennas and two receive antennas.

    The system operates at 73GHz with 2GHz of spectrum. The wide spectrum availability is one of the main attractions of going to mmWave frequencies for 5G.

    The transmitter has a similar head design, also with vertically and horizontally polarized antennas. Each supports a 5Gb/s stream. Together, they achieve the 10Gb/s banner spec for 5G.
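
    A quick sanity check on those headline numbers (my arithmetic, not Nokia’s or NI’s): two 5Gb/s streams in 2GHz of spectrum is a modest spectral efficiency, which is exactly the mmWave argument: raw bandwidth does the heavy lifting.

        # Sketch: spectral efficiency implied by the Nokia/NI demo figures above.
        bandwidth_hz = 2e9            # 2 GHz of spectrum at 73 GHz
        streams = 2                   # 2x2 MIMO, one 5 Gb/s stream per polarization
        rate_per_stream_bps = 5e9

        aggregate_bps = streams * rate_per_stream_bps
        print("aggregate rate: %.0f Gb/s" % (aggregate_bps / 1e9))                          # 10
        print("per-stream efficiency: %.2f b/s/Hz" % (rate_per_stream_bps / bandwidth_hz))  # 2.50
        print("aggregate efficiency:  %.2f b/s/Hz" % (aggregate_bps / bandwidth_hz))        # 5.00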

    Dr. Amitava Ghosh from Nokia described Nokia’s efforts in 5G. They had shown a 2.3Gb/s system a year ago, and now demonstrated a 10Gb/s system, which they claimed as the fastest mmWave demo to date.

    While the live demo on the stage spanned approximately 20 meters from the base station to the handset (both emulated by PXI and LabView FPGA), Nokia has shown the system capable of 200 meters, a key benchmark in the race to 5G.

    The base station (transmit) side had a single PXI chassis, driving the transmit head. The handset (receive) side had two PXI chassis processing the signals from the receive head.

    The main receive-side chassis was essentially a LabView FPGA based supercomputer. It held 13 LabView FPGA modules, performing the baseband processing. Some were standard FlexRIO modules; others were custom modules that added direct links between the modules, circumventing PXI’s PCIe backplane.

    LabView generated well in excess of 90% of the code, including all the measurement science, while some glue logic at the system level was written manually.

    ADCs (analog-to-digital converters) sat alongside the FPGA modules. These digitize data from the IF module, essentially generating two IQ data streams (one for each MIMO channel) to be processed by the FPGA modules.

    The IF modules sat in the second PXI chassis.

    Truchard also emphasized the need for fast prototyping tools for 5G. The LabView architecture, where the embedded FPGA algorithms can be quickly modified by researchers at a high level as they learn about the system performance, delivers this quick prototyping environment. Indeed, Nokia claims that their speed jump from 2.3Gb/s a year ago to 10Gb/s today was largely enabled by the NI tools.

    A word of caution to readers. First, while this toolset offers some impressive prototyping capabilities, it’s not actually on the market yet, and NI never volunteered when it would be. Second, while it exhibited 2×2 MIMO, massive MIMO systems are expected to be much larger. Some phased arrays are aiming at 256 elements on the base station side. Using or emulating these arrays will be essential to beamforming algorithm design. Third, like 3G and 4G before it, 5G will require a flotilla of measurement solutions, not a single application. Physical, protocol, and network layers will all have to be defined, designed, and verified. Indeed, 5G may need more solutions due to the diversity of frequency bands and the exploding combinations of inter-RAT handovers (handovers between different radio access technologies).

    Reply
  44. Tomi Engdahl says:

    Wi-Fi in the sky
    http://www.edn.com/electronics-blogs/brians-brain/4440095/Wi-Fi-in-the-sky?_mc=NL_EDN_EDT_EDN_today_20150811&cid=NL_EDN_EDT_EDN_today_20150811&elq=96872580c5324ac1988d77b0fdef34ae&elqCampaignId=24300&elqaid=27452&elqat=1&elqTrackId=00c6080c0d8345cf956af0c0921ef014

    Back at the end of 2013, I wrote about my mostly-positive experiences with the pervasive network connectivity provided by Southwest Airlines (In-Flight Wi-Fi) and Comcast (Xfinity Wi-Fi) services. On a recent flight from San Jose to San Diego, however, the Southwest In-Flight Wi-Fi bandwidth and latency were so bad as to render the service essentially unusable (to the airline’s credit, they promptly refunded my money after I sent them a post-flight complaint).

    My connectivity troubles didn’t dissipate once I got to San Diego.

    Sometimes I couldn’t connect to the router at all. Other times, it wouldn’t give me a DHCP IP address assignment. If I waited a few minutes, I could usually get back online again … but only after watching another ad first … and once again only for a few seconds-to-minutes.

    Connectivity didn’t get any better once we got to the hotel. When its router was “up,” both it and the broadband connection feeding it were speedy and responsive. But it unfortunately wasn’t “up” very much;

    All hasn’t been bad in the airport world of late. Free Wi-Fi at both Reno and San Jose Airports was problem-free; I didn’t even need to watch an ad or otherwise log in first in order to use it. But fast food restaurants were hit-and-miss

    The fundamental problem here is that broadband service was offered and in fact aggressively promoted by each merchant. Had Wi-Fi not been available, I would have figured out some other way to get online, or done without Internet access. But since it was offered, I counted on it, and its unreliability was all the more frustrating as a result. The lesson is the same here as it is with any other technology product: it’s better to under-promise and over-deliver than the converse.

    Reply
  45. Tomi Engdahl says:

    Gigabit Google? We’re getting ready for 10 gigabits says Verizon
    NG-PON2 passes test, Verizon preps for RFPs by year-end
    http://www.theregister.co.uk/2015/08/12/10_gigabits_says_verizon/

    Verizon has upped the ante in the fibre-to-the-home business, plugging some test kit into its network to show off 10Gbps.

    The test was a proof-of-concept for what’s called NG-PON2 – next-generation passive optical network – a standards roadmap (developed by the FSAN group and the ITU-T) that plots GPON (gigabit PON) upgrades with a minimum of new kit.

    In the test, Verizon sent 10Gbps streams from a Framingham central office to a FiOS home customer three miles (4.8 km) away, and to a nearby business.

    The test also demonstrated backwards-compatibility of kit supporting the NG-PON2 standard, since the fibres used in the test were also carrying live GPON traffic at the time.

    The kit the company used – from Cisco and PT Inovacao, the technology R&D arm of Portugal Telecom – currently maxes out at 10 Gbps, but Verizon’s announcement notes that NG-PON2 also supports wavelength division multiplexing (WDM).

    Switching on extra wavelengths, the company says, would support download speeds up to 80Gbps. The optical line terminal (OLT) used in the test ran four wavelengths, each able to support 10G/2.5G customer connections.

    NG-PON2: a primer

    The IEEE has an NG-PON2 paper here [PDF].
    http://www.ieee1904.org/events/2015_02_joint_session/js_1502_fsan.pdf

    That paper explains that the project, initiated in 2010, envisages that each operator’s optical port should support at least 64 customers (and eventually as many as 256), and should have a reach of at least 40 km using passive technologies (that is, without needing active kit between the exchange and the customer).

    Optical network units – the customer’s broadband modem – are configurable to tune into whichever wavelength is carrying their traffic.

    For highest-capacity services, the NG-PON2 standard supports point-to-point WDM from network to customer. However, the greatest benefit of the technology for carriers will be that it also supports time division multiplexing on the WDM channels, so capacity can be shared between a number of consumers.

    Point-to-point implementations will use either 1603-1625nm or 1524-1625nm wavelengths.
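
    Putting the figures above together (my own arithmetic, based on the capacities quoted in the article and the FSAN paper): the wavelength count sets the headline aggregate, while TDM sharing sets what a single customer can actually count on.

        # Sketch: NG-PON2 capacity arithmetic from the figures quoted above.
        downstream_per_wl_bps = 10e9     # 10 Gb/s per wavelength
        wavelengths_tested = 4           # OLT in the Verizon trial
        wavelengths_headline = 8         # enough for the quoted 80 Gbps figure
        customers_per_port = 64          # minimum in the NG-PON2 project goals

        print("trial aggregate:    %.0f Gb/s" % (wavelengths_tested * downstream_per_wl_bps / 1e9))
        print("headline aggregate: %.0f Gb/s" % (wavelengths_headline * downstream_per_wl_bps / 1e9))

        # TDM sharing: if all 64 customers sat on one 10G wavelength,
        # the long-run fair share would be about 156 Mb/s each.
        print("fair share per customer: %.0f Mb/s" % (downstream_per_wl_bps / customers_per_port / 1e6))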

    Reply
  46. Tomi Engdahl says:

    Full duplex! Bristol boffins demo Tx and Rx on the same frequency AT THE SAME TIME
    It’s all down to silicon chippery crunching the waves
    http://www.theregister.co.uk/2015/08/13/full_duplex_radio_tech_bristol_uni/

    Transmitting and receiving on the same radio frequency, at the same time, has been demonstrated by the UK’s University of Bristol in a new YouTube video.

    The system being used is running 900MHz and 1800MHz with exactly the same equipment.

    A number of companies are working on this technology, called full-duplex radio, which has traditionally been viewed as extremely difficult – if not impossible – to implement.

    However, what has now seemingly made it possible are improvements in processing, which allows the noise of the transmitted signal to be subtracted from the received signal.

    One of the first patents on full duplex was filed by UK-based electronics, defence and telecoms company Plessey in 1980, for a combat radio repeater using the tech. This was called “Groundsat” and was used by the British Army. There was also a patent filed by Bristol University in the 1990s.

    In theory, full duplex does not affect the propagation, although one expert we spoke to said that there could be limitations from second-order effects.

    Leo Laughlin, an assistant at the University of Bristol’s Faculty of Engineering, told The Register: “Full duplex does not affect the propagation in any fundamental way; what you have going ‘over the air’ are the two signals (uplink and downlink) following the exact same propagation path, but traveling in opposite directions.”

    “However, achieving full duplex in a practical cellular system may only be practical in smaller cells (although perhaps no smaller than today’s microcells),” he added. “This is because to achieve any throughput gain from full duplex you must suppress the interference to an acceptably low level, and this level depends on the difference between the transmit and receive signal powers.”

    “I think full duplex is likely to be a technology which is deployed to increase capacity and data rates in high density small cell areas of the network, and is not likely to be used for wireless access in macrocells,” although it could be used for backhaul in both types of cell.

    In theory it should be happy at the frequencies above 6GHz, which are being touted for 5G, as full duplex is frequency agnostic.

    But this is theory and it’s not yet been tried; the team acknowledges there is lots more work to be done in this space.

    The Bristol team demonstrated how the self-interference from the transmitter can be reduced by 50dB, taking it down to the level of general background noise.

    “Our prototype uses Electrical Balance Isolation, which requires just one antenna and can be implemented on chip. Kumu’s system has some advantages over our own, particularly that it can easily cancel signal imperfections introduced by the non-ideal characteristics of the transmitter chain,” said Laughlin.

    “However, our prototype uses low-cost technologies which can be implemented on-chip, making it more suitable for implementing full duplex in mobile device form factors,” he added.
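
    To see why “down to the level of general background noise” is the benchmark that matters, here is a rough self-interference budget (illustrative numbers of my own choosing, not Bristol’s): the leakage from your own transmitter must be pushed to the thermal noise floor before full duplex genuinely doubles capacity, and no single cancellation stage gets there alone.

        # Sketch: rough full-duplex self-interference budget (illustrative numbers).
        import math

        tx_power_dbm = 20.0                  # a plausible small-cell transmit level
        bandwidth_hz = 20e6                  # one LTE-sized channel
        noise_floor_dbm = -174.0 + 10.0 * math.log10(bandwidth_hz)   # about -101 dBm

        required_db = tx_power_dbm - noise_floor_dbm
        print("total suppression needed: %.0f dB" % required_db)     # about 121 dB

        # A 50 dB stage like the one demonstrated leaves the remainder for
        # antenna isolation and digital cancellation to mop up.
        print("left after a 50 dB stage: %.0f dB" % (required_db - 50.0))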

    Reply
  47. Tomi Engdahl says:

    Embedded Cisco Routers
    http://www.elma.com/en/products/systems-solutions/application-ready-platforms/cisco-routers/

    Cisco-certified rugged, mobile routing and switching plus the packaging and integration know-how of Elma Electronic.

    As a qualified Solution Technology Integrator (STI partner) for Cisco, Elma now offers a range of rugged, embedded computing systems designed for deployment in harsh environments, incorporating the reliability and unmatched performance of Cisco’s 5915 embedded router and 2020 switch for mobile applications. Featuring Cisco IOS® Software and complemented by Cisco Mobile Ready Net capabilities, each system enables highly secure data, voice, and video communications to stationary and mobile network nodes across wired and wireless links.

    Systems ranging from ultra rugged to light industrial meet the critical need for on-demand network connectivity for industrial, commercial, military, homeland-security, and emergency-response applications.

    Reply
  48. Tomi Engdahl says:

    High-precision timing networks

    All electronics need an accurate clock signal in order to operate at all. Silicon Labs has now introduced a new clock circuit for network routers. The Si5348 chip supports the new packet-based SyncE and IEEE 1588 synchronization technologies.

    In many network devices, the old clock signal circuits are based on Stratum 3 chips. These circuits are not optimized for size, power consumption or performance.

    Silicon Labs says the Si5348 is the smallest and lowest-power clock generator of its kind: power consumption is 50 percent lower and the footprint 35 percent smaller than traditional solutions, while jitter figures are up to 80 percent smaller. The result is a more stable clock signal, which means more reliably operating network equipment.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3195:erittain-tarkka-ajastus-verkkoihin&catid=13&Itemid=101

    Reply
  49. Tomi Engdahl says:

    Synchronous Ethernet
    https://en.wikipedia.org/wiki/Synchronous_Ethernet

    Synchronous Ethernet, also referred to as SyncE, is an ITU-T standard for computer networking that facilitates the transference of clock signals over the Ethernet physical layer. This signal can then be made traceable to an external clock.

    The aim of Synchronous Ethernet is to provide a synchronization signal to those network resources that may eventually require such a type of signal. The Synchronous Ethernet signal transmitted over the Ethernet physical layer should be traceable to an external clock, ideally a master and unique clock for the whole network. Applications include cellular networks, access technologies such as Ethernet passive optical network, and applications such as IPTV or VoIP.

    Unlike time-division multiplexing networks, the Ethernet family of computer networks does not natively carry clock synchronization information. Several means are defined to address this issue; IETF’s Network Time Protocol (NTP) and IEEE’s 1588-2008 Precision Time Protocol (PTP) are some of them.

    SyncE was standardized by the ITU-T, in cooperation with IEEE, as three recommendations:

    ITU-T Rec. G.8261 that defines aspects about the architecture and the wander performance of SyncE networks
    ITU-T Rec. G.8262 that specifies Synchronous Ethernet clocks for SyncE
    ITU-T Rec. G.8264 that describes the specification of Ethernet Synchronization Messaging Channel (ESMC)

    SyncE architecture minimally requires replacement of the internal clock of the Ethernet card by a phase-locked loop in order to feed the Ethernet PHY.

    Extension of the synchronization network to consider Ethernet as a building block (ITU-T G.8261). This enables Synchronous Ethernet network equipment to be connected to the same synchronization network as Synchronous Digital Hierarchy (SDH). Synchronization for SDH can be transported over Ethernet and vice versa.

    ITU-T G.8262 defines Synchronous Ethernet clocks compatible with SDH clocks. Synchronous Ethernet clocks, based on ITU-T G.813 clocks, are defined in terms of accuracy, noise transfer, holdover performance, noise tolerance and noise generation. These clocks are referred to as Ethernet Equipment Slave clocks (EECs). While the IEEE 802.3 standard specifies Ethernet clocks to be within ±100 ppm, EEC accuracy must be within ±4.6 ppm.

    In SDH, the Synchronization Status Message (SSM) provides traceability of synchronization signals and it is therefore required to extend the SSM functionality to Synchronous Ethernet to achieve full interoperability with SDH equipment.

    A general requirement for SyncE was that any network element (NE) should have at least two reference clocks, and in addition, Ethernet interfaces must be able to generate their own synchronization signal in case they lose their external reference. If such is the case, it is said that the Ethernet node (EN) is in holdover. The synchronous signal must be filtered and regenerated by a phase-locked loop (PLL) at the Ethernet nodes since it degrades when passing through the network.
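
    The ppm limits above translate directly into frequency error and holdover drift. A quick sketch (my arithmetic, applying the ±100 ppm and ±4.6 ppm limits quoted above to a nominal 125 MHz Gigabit Ethernet clock):

        # Sketch: what the ppm accuracy limits above mean for a 125 MHz GbE clock.
        nominal_hz = 125e6

        for label, ppm in (("IEEE 802.3 Ethernet", 100.0), ("G.8262 EEC", 4.6)):
            print("%-19s +/-%5.1f ppm -> +/-%5.0f Hz" % (label, ppm, nominal_hz * ppm * 1e-6))

        # Holdover: a clock off by f ppm accumulates f microseconds of time error
        # per second of free running; at 4.6 ppm that is about 16.6 ms over an hour.
        for seconds in (1, 60, 3600):
            print("4.6 ppm held for %4d s -> %8.1f us of error" % (seconds, 4.6 * seconds))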

    Reply
