Telecom and networking trends 2013

One of the big trends of 2013 and beyond is the pervasiveness of technology in everything we do – from how we work to how we live and how we consume.

Worldwide IT spending growth was anemic last year, as IT and telecom services spending were seriously curtailed. Things now seem to be improving. Telecom services spending, which has been curtailed in the past few years, grew by only a tenth of a point in 2012, to $1.661tr, but Gartner projects spending on mobile data services to grow enough to more than compensate for declines in fixed and mobile voice revenues. Infonetics Research sees telecom sector growth outpacing GDP growth. Global capital expenditure (capex) by telecommunications service providers is expected to increase at a compound annual rate of 1.5% over the next five years, from $207 billion in 2012 to $223.3 billion in 2017, says a new market report from Insight Research Corp.
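
That capex projection is easy to sanity-check. Here is a minimal sketch of the compound-growth arithmetic, using only the figures quoted above:

```python
# Sanity check: $207 billion growing at 1.5% compounded annually
# for the five years from 2012 to 2017.
capex_2012 = 207.0   # USD billions (Insight Research figure)
cagr = 0.015         # 1.5% compound annual growth rate
years = 5            # 2012 -> 2017

capex_2017 = capex_2012 * (1 + cagr) ** years
print(f"Projected 2017 capex: ${capex_2017:.1f}B")  # ~$223.0B, close to the quoted $223.3B
```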

Europe’s Telco Giants In Talks To Create Pan-European Network. Europe’s largest mobile network operators are considering pooling their resources to create pan-European network infrastructure, the FT is reporting. Mobile network operators are frustrated by a “disjointed European market” that’s making it harder for them to compete.

“Internet of Things” gets a new push. The Ten Companies (Including Logitech) Team Up To Create The Internet Of Things Consortium article tells that your Internet-connected devices may be getting more cooperative, thanks to a group of startups and established players who have come together to create a new nonprofit group called the Internet of Things Consortium.

Machine-to-Machine (M2M) communications are used more and more. Machine-to-machine technology made great strides in 2012, and I expect an explosion of applications in 2013. Mobile M2M communication offers developers a basis for countless new applications in all manner of industries. The Extreme conditions M2M communication article tells that M2M devices often need to function in extreme conditions. According to market analysts at Berg Insight, the number of communicating machines is set to rise to around 270 million by 2015. The booming M2M market is driven by the practically unlimited uses for M2M communications. More and more areas of life and work will rely on M2M.

The car of the future is M2M-ready and has Ethernet. Ethernet has already been widely accepted by the automotive industry as the preferred interface for on-board diagnostics (OBD). Many cars already feature Internet connectivity, and many manufacturers are taking an additional step to develop vehicle connectivity. One such example is the European Commission’s emergency eCall system, which is on target for installation in every new car by 2015. The aim of vehicle-to-vehicle communications and Internet connectivity within vehicles is also to detect traffic jams promptly and prevent them from getting any worse.

M2M branches beyond one-to-one links article tells that M2M is no longer a one-to-one connection but has evolved to become a system of networks transmitting data to a growing number of personal devices. Today, sophisticated and wireless M2M data modules boast many features.

The Industrial Internet of Things article tells that one of the biggest stories in automation and control for 2013 could be the continuing emergence of what some have called the Internet of Things, or what GE is now marketing as the Industrial Internet. The big question is whether companies will see the payback on the needed investment. And there are many security issues that need to be carefully weighed.

Very high speed 60GHz wireless will be talked about a lot in 2013. Standards sultan sanctifies 60GHz wireless LAN tech: IEEE blesses WiGig’s HDMI-over-the-air and publishes 802.11ad. The Wi-Fi and WiGig Alliances are becoming one and will work together to promote 60GHz wireless. WiGig Alliance’s 60GHz “USB/PCI/HDMI/DisplayPort” technology sits on top of the IEEE radio-based communications spec. WiGig’s everything-over-the-air system is expected to deliver up to 7Gbit/s of data, albeit only over a relatively short distance from the wireless access point. The fastest Wi-Fi ever is almost ready for real-world use, as WiGig routers, docking stations, laptops, and tablets were shown at CES. It’s possible the next wireless router you buy will use the 60GHz frequency as well as the lower ones typically used in Wi-Fi, allowing for incredibly fast performance when you’re within the same room as the router and normal performance when you’re in a different room.

Communications on power line still gets some interest at least inside house. HomePlug and G.hn are tussling it out to emerge as the de-facto powerline standard, but HomePlug has enjoyed a lot of success as the incumbent.

Silicon photonics ushers in 100G networks article tells that a handful of companies are edging closer to silicon photonics, hoping to enable a future generation of 100 Gbit/s networks.

Now that 100G optical units are entering volume deployment, faster speeds are clearly on the horizon. The Looking beyond 100G toward 400G standardization article tells that the push is now officially on for a 400 Gigabit Ethernet standard. The industry is trying to avoid the mistakes made with 40G optics, which lacked any industry standards.

Market for free-space optical wireless systems expanding. Such systems are often positioned as an alternative to fiber-optic cables, particularly when laying such cables would be cost-prohibitive or where permitting presents an insurmountable obstacle. DARPA Begins Work On 100Gbps Wireless Tech With 120-mile Range.

914 Comments

  1. Tomi says:

    NSA loophole allows warrantless search for US citizens’ emails and phone calls
    http://www.theguardian.com/world/2013/aug/09/nsa-loophole-warrantless-searches-email-calls

    Exclusive: Spy agency has secret backdoor permission to search databases for individual Americans’ communications

  2. Tomi says:

    Don’t worry, NSA says—we only “touch” 1.6% of daily global Internet traffic
    http://arstechnica.com/tech-policy/2013/08/dont-worry-nsa-sayswe-only-touch-1-6-of-daily-global-internet-traffic/

    New seven-page document from Ft. Meade defends agency’s activities and policies.

    On the same day that President Barack Obama spoke to the press about possible surveillance reforms—and released a related white paper on the subject—the National Security Agency came out with its own rare, publicly-released, seven-page document (PDF), essentially justifying its own practices.

  3. Tomi says:

    4G finally becoming more common in Europe

    Nokia Solutions and Networks’ CEO expects 4G to take off in Europe in the second half of the year, and Britain seems set to prove him right.

    Britain’s four largest mobile operators will offer three genuine 4G services this month.

    While 4G LTE is already quite widely available in the Nordic countries, the rest of Europe is only now starting to build next-generation networks. This month, Britain gets two new 4G operators.

    NSN’s CEO Rajeev Suri told the Reuters news agency on Wednesday that 4G LTE networks, which have so far been opened above all in Japan, South Korea and the United States, are expanding globally.

    “I believe that Europe will start launching LTE networks substantially in the second half of the year,” Suri said on the same day that NSN became wholly owned by Nokia and the company was renamed Nokia Solutions and Networks.

    According to the EU, three out of four EU citizens cannot get onto a 4G network in their home town. 4G services in rural areas are hardly available at all, writes Expat Forum with reference to the EU’s report. In the United States, 4G covers 90 percent of the population.

    Source: http://www.tietokone.fi/artikkeli/uutiset/4g_yleistyy_viimein_euroopassa

  4. Tomi says:

    NSA by the numbers
    http://buzzmachine.com/2013/08/10/nsa-by-the-numbers/

    Fear not, says the NSA, we “touch” only 1.6% of daily internet traffic. If, as they say, the net carries 1,826 petabytes of information per day, then the NSA “touches” about 29 petabytes a day. They don’t say what “touch” means. Ingest? Store? Analyze? Inquiring minds want to know.

    For context, Google in 2010 said it had indexed only 0.004% of the data on the net. So by inference from the percentages, does that mean that the NSA is equal to 400 Googles? Better math minds than mine will correct me if I’m wrong.

    Seven petabytes of photos are added to Facebook each month. That’s .23 petabytes per day. So that means the NSA is 126 Facebooks.

    Keep in mind that most of the data passing on the net is not email or web pages. It’s media. According to Sandvine data for the U.S. fixed net from 2013, real-time entertainment accounted for 62% of net traffic, P2P file-sharing for 10.5%.

    HTTP — the web — accounts for only 11.8% of aggregated up- and download traffic in the U.S., Sandvine says. Communications — the part of the net the NSA really cares about — accounts for 2.9% in the U.S.

    So by very rough, beer-soaked-napkin numbers, the NSA’s 1.6% of net traffic would be half of the communication on the net.
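
    Those beer-soaked-napkin numbers are easy to reproduce; here is a minimal sketch using only the figures quoted above (1,826 PB/day of traffic, Google’s 0.004%, Facebook’s 7 PB of photos per month, Sandvine’s 2.9% for communications):

    ```python
    # Reproducing the back-of-the-envelope comparisons above.
    net_traffic_pb_per_day = 1826      # total Internet traffic, petabytes per day
    nsa_share = 0.016                  # NSA says it "touches" 1.6%

    nsa_pb_per_day = net_traffic_pb_per_day * nsa_share
    print(f"NSA 'touches' ~{nsa_pb_per_day:.0f} PB/day")              # ~29 PB/day

    google_index_share = 0.00004       # Google (2010): indexed 0.004% of the net
    print(f"{nsa_share / google_index_share:.0f} 'Googles'")          # 400

    facebook_pb_per_day = 7 / 30       # 7 PB of photos per month ~ 0.23 PB/day
    print(f"{nsa_pb_per_day / facebook_pb_per_day:.0f} 'Facebooks'")  # ~125 (the post rounds to 126)

    communications_share = 0.029       # Sandvine: communications share of US traffic
    print(f"{nsa_share / communications_share:.0%} of communications")  # ~55%, i.e. about half
    ```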

    metadata doesn’t add up to much data at all; it’s just a few bits per file — who sent what to whom — and that’s where the NSA finds much of its incriminating information.

  5. Tomi Engdahl says:

    CommScope airs Cat 8 vs. Cat 7a concerns re: 40G
    http://www.cablinginstall.com/articles/2013/07/commscope-airs-40g.html?cmpid=EnlCIMAugust122013

    CommScope has released a PDF white paper document entitled Category 8 Cabling Standards Update, which airs the company’s concerns surrounding the TIA TR42.7 Study Group’s decision to adopt ISO/IEC Class II cabling performance criteria into its pending ANSI/TIA-568-C.2-1 category 8 project, as recently reported in this space.

    “The approval of the new task group to study Class II channels is a recognition of the advances in PIMF cabling technology needed to support the use of balanced twisted copper cabling for 40 G applications,” admits CommScope. Provocatively, the company then adds that “this recognition is somewhat tempered by the fact that TIA TR42.7 has to deal with the issue of three different connector types allowed for Class II,” while contending that “the decision to develop Class II up to 2000 MHz is a clear indication that Category 7A cabling specified up to 1000 MHz will not be sufficient for 40 G applications.”

  6. Tomi Engdahl says:

    ANSI/BICSI-005 standard addresses design and implementation of security systems
    http://www.cablinginstall.com/articles/2013/08/ansi-bicsi-005.html?cmpid=EnlCIMAugust122013

    BICSI recently released its latest ANSI-approved standard, ANSI/BICSI-005-2013 Electronic Safety and Security (ESS) System Design and Implementation Best Practices. When announcing the standard’s availability, BICSI commented, “As the systems used within security have become more complex, so too has the cabling infrastructure to address both communication and security requirements. Little has been written to support this convergence of security and cabling infrastructure, until now.”

    The association added that the BICSI-005 standard “bridges the two worlds of security and communications by providing the security professional with the requirements and recommendations of a structured cabling infrastructure needed to support today’s security systems while providing the cabling design professional information on different elements within safety and security systems that affect the design.”

    Jerry Bowman, BICSI’s president, noted of the standard, “The protection against risks and threats to life-safety, business and personal assets is and always will be a matter of great importance. BICSI-005 is a tremendous resource for those working on the design and implementation of electronic safety and security and related infrastructure for a variety of security functions and systems. We truly appreciate the efforts of all the volunteer subject matter experts who contributed to this publication.”

  7. Tomi Engdahl says:

    OFS develops hollow-core fiber
    http://www.cablinginstall.com/articles/2013/08/ofs-hollow-core-fiber.html

    OFS has developed a Hollow-Core Fiber (HCF) that it says exceeds state-of-the-art performance and eliminates limitations which have inhibited applications of this potentially disruptive technology.

    The novel fiber design was developed by OFS Laboratories in the DARPA-funded Compact Ultra Stable Gyro for Absolute Reference (COUGAR) program, led by Honeywell International, Inc. The fiber employs an air-filled core surrounded by glass webbing, and is the first to demonstrate improvement in three key characteristics critical for such applications as high-precision fiber-optic gyroscopes for inertial navigation.

    Hollow-core fibers allow light to propagate through free space rather than a solid glass core, making them an ideal waveguide in theory.

    Hollow-core fiber can be bent and coiled to tight bend radius while guiding light at speeds 30 percent faster than conventional fiber. In addition to high precision gyroscopes, there are many applications that will benefit from this technology

  8. Tomi Engdahl says:

    TE unveils new Cat 6A unshielded cabling system for 10-GbE applications
    http://www.cablinginstall.com/articles/2013/08/te-cat6a-redesign.html

    The copper cabling and connectivity system is comprised of TE’s new SL Series AMP-TWIST jacks and panels, along with a reduced-diameter Cat 6A patch cord.

    TE says the new Cat 6A U/UTP system offers the first unshielded structured cable for 10 GbE transmission of up to 100M — ideal for accommodating today’s higher processing speeds and bandwidth demand. Benefiting from TE’s AirES cable and “inside-out” filler technologies, the cable is able to provide higher throughput and an increased signal-to-noise ratio in a smaller package.

    At approximately 0.285 inches in diameter and featuring 0.235″ reduced diameter patch cords, the company says the cable is up to 32% thinner than traditional Cat 6A cables

  9. Tomi Engdahl says:

    APOLAN forms, exhorts PON, fiber enterprise over ‘traditional’ copper Ethernet designs
    http://www.cablinginstall.com/articles/2013/08/apolan-forms-aims-to-fiber-enterprise.html

    Seven major companies in the IT networking sphere have joined to form a new industry association promoting the use of passive optical LAN technology. The founding members of the newly announced Association for Passive Optical LAN (APOLAN) include:

    Corning
    IBM
    SAIC
    TE Connectivity
    Tellabs
    Zhone
    3M

    The association’s stated mission is to advocate for the education and global adoption of passive optical networks in the local area network (LAN) industry. APOLAN aims to spread the word about the benefits of the passive optical LAN fiber-optic architecture, which adapts the passive optical network (PON) technology commonly used in fiber to the home (FTTH) applications to drive enterprise networks.

    With bandwidth demands growing in the enterprise, the members of APOLAN see passive optical LANs as an ideal method to keep pace with requirements in a less expensive, more future-proof way than traditional workgroup switch-based LAN architectures.

    “Passive Optical LAN serves as the optimized means to deliver voice, video, data, wireless access, security and high-performance building automation for the federal government and commercial enterprise,” states the association’s website. “Passive Optical LAN saves money, energy and space.”

    “With data and video consumption forecast to grow between 7-10x in the next few years, the demand for highly cost-effective and high-quality voice, video, and data continues to grow in the enterprise LAN market space, making passive optical LAN an appealing solution to address current and future bandwidth demands,” comments Nav Chander, research manager, enterprise telecom at IDC, via an APOLAN press release.

  10. Tomi Engdahl says:

    Google Fiber Continues Awful ISP Tradition of Banning “Servers”
    https://www.eff.org/deeplinks/2013/08/google-fiber-continues-awful-isp-tradition-banning-servers

    In a Wired piece published recently, Ryan Singel assails Google’s newfound hypocrisy when it comes to net neutrality. And he’s right. Having spent many years fighting to stop Internet Service Providers (ISPs) from discriminating between different types of Internet traffic, the tech giant is now perpetuating a long-standing form of that discrimination with Google Fiber, its own ISP, by adopting a terrible Terms of Service clause that bans the use of “servers.” Google’s ban on servers is sadly not a departure from the norm, as similar prohibitions can be found within the Terms of Service of other large ISPs.

    This norm is unreasonable – it is a power grab by ISPs that damages user freedom and chills innovation of different types of Internet-based technologies that don’t follow the traditional centralized model.

    What’s a “server” anyway?

    The first problem with prohibiting servers is that there is no good definition of a server. The notions of servers and clients can be very useful when illustrating how many basic web services work, but the distinction quickly gets blurry in practice. When you run peer-to-peer services like BitTorrent, your computer is acting both as a client and a server. And these services aren’t limited to BitTorrent, as the peer-to-peer approach has garnered attention as a distribution mechanism for traditional media as well, and is part of the architecture of many mainstream services like Skype and Spotify.

    No ISP will come forward with a tighter definition of “server” because they want to give themselves leeway to ban users and technologies that they deem to be troublemakers. This strategy of making incredibly broad, vague, and one-sided contracts is deeply problematic and unfair towards users, and it’s disheartening to see Google follow this well-trodden path.
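
    To make the definitional problem concrete, here is a minimal sketch (plain Python standard library, a hypothetical example rather than anything from the article) of how little it takes for an ordinary home machine to become a “server” under such a clause:

    ```python
    # A trivial TCP echo "server": a laptop on a home connection running this
    # is, by most definitions an ISP might choose, now operating a server.
    import socketserver

    class EchoHandler(socketserver.BaseRequestHandler):
        def handle(self):
            data = self.request.recv(1024)   # read up to 1 KB from the client
            self.request.sendall(data)       # send the same bytes back

    if __name__ == "__main__":
        with socketserver.TCPServer(("0.0.0.0", 8000), EchoHandler) as srv:
            srv.serve_forever()              # listen until interrupted
    ```

    The same ambiguity applies to BitTorrent, Skype or a FreedomBox: each of them listens for incoming connections in much the same way.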

    Why shouldn’t we run servers?

    Beyond the vagueness of what makes a “server,” the next natural question is why this prohibition against servers should exist in the first place. Users have a diverse set of needs, and many of us regularly make use of servers that we run on home networks.

    There can be major privacy and security benefits to running your own server. Running an SSH or VPN server allows me to remotely connect to a home computer and trusted network, and running a mail server allows me to store my email locally, hence enjoying greater Constitutional protections for my email. Moreover, projects like FreedomBox – which aim to enhance security and privacy by giving users more control over their communication and social networking data – very much depend on users being able to run programs that could easily be deemed “servers.”

    Servers can be used in all sorts of clever ways. If the ban on running servers were lifted, ordinary Internet users would be able to do a multitude of interesting things with fewer barriers, spurring innovation. This will be even more true in the coming years, especially if IPv6 adoption obsoletes a technology called NAT (which stands for Network Address Translation) that currently creates a barrier to running some types of servers (like web servers) from home networks.

    Arguments that ISPs need to have this anti-server policy for business reasons are specious

  11. Tomi Engdahl says:

    Why a custom-length outside-plant cable costs so much
    http://www.cablinginstall.com/articles/2013/07/cost-of-cable-cuts.html?cmpid=EnlCIMAugust122013

    Among the interesting realities for the supplier is the scrapping of shorter lengths of cable. Whereas some of a copper cable’s production cost can be recovered through recycling (about 60 percent on average), “For fiber cables, the loss of scrap is 100 percent,” Superior Essex says. “There is no cost recovery obtained scrapping a length of fiber cable, even though the plastics and steel used to make the cable are sometimes recyclable. The manufacturer or distributor will not typically receive any compensation for sending a fiber cable to a recycler.”

  12. Tomi Engdahl says:

    Fiber-optic cabling in strange and unusual places
    http://www.cablinginstall.com/articles/print/volume-20/issue-9/features/fiber-optic-cabling-in-strange-and-unusual-places.html?cmpid=EnlCIMAugust122013

    Many of the unusual environments in which you find fiber-optic cabling are physically demanding on the infrastructure and require it to have rugged physical characteristics.

    Field-terminated solutions

    The traditional field-terminated solution is a hands-on craftsman method. This solution allows more installation flexibility but also requires a skilled-labor team, as well as additional equipment and time during the installation process. The three primary components to this solution are cable, hardware and connectors. Each component must meet or exceed the applicable industry standard to ensure long-term reliable performance.

    Tray-rated optical cables are specifically engineered for harsh environment installations. These cables have an increased tensile load along with higher impact- and crush-resistance than standard cables

    Additional mechanical protection can be provided to a cable by including a corrugated or interlocking-armored layer to the cable construction

    This harsh-environment fiber-optic hardware is rated NEMA 4X or IEC IP66, meaning the housing’s inside is completely sealed and protected against rodent infiltration, air-blown dust, water ingress, accidental contact with equipment, extreme weather conditions and/or corrosive agents.

    Preterminated solutions

    Unlike the traditional field-terminated solution, the preterminated or factory-terminated solution requires more upfront work during the design phase. Each connection point must be precisely measured from tip to tip. With preterminated solutions, the design must be exactly right about where everything is going to go. The factory customizes the preterminated solution based on provided length measurements. Once the system is manufactured, there isn’t a way to access the cables in the field without destroying the pre-engineered integrity originally desired from a factory-terminated and -tested solution.

    However, the benefits of a preterminated system can far outweigh these factors. Deploying a preterminated factory solution dramatically saves time and money during installation, which is a must if the facility is already operational.

    Which is best?

    The key difference between field-terminated and preterminated solutions is in the controlled environment where the termination and testing take place. With preterminated solutions, peace of mind comes from having these processes performed in a controlled environment, while being able to provide the highest quality insertion/return loss and performance. With a field-terminated solution, quality depends on the installer on the day of the install, combined with the conditions in the harsh environment.

    Installation best practices

    When looking at best practices while installing fiber-optic systems in any environment, safety should always come first. The working conditions of installation in a harsh environment will be less than ideal. The installer must be aware of what’s going on around them at all times. The rugged and potentially hazardous nature of their work area can affect the quality of the install, not to mention their physical safety.

    In a harsh environment, proper cleaning is a particular concern, as contamination is the most common cause of network downtime and errors. Whether deploying a field-terminated or preterminated solution, it is vital that all connectors are properly handled and cleaned during the system’s installation and before mating any connector. The slightest dust particle or chemical debris can greatly affect the overall performance of the network. Tools, such as port cleaners and connector cleaning cassettes, can make the task much more efficient and easier than other methods.

  13. Tomi Engdahl says:

    REVEALED: Simple ‘open sesame’ to unlock your HOME by radiowave
    Schoolboy security slip-ups in burglar sensors, electronic locks discovered
    http://www.theregister.co.uk/2013/08/13/wave_goodbye_to_security_with_zwave/

    Black Hat A pair of security researchers probing the Z-Wave home-automation standard managed to unlock doors and disable sensors controlled by the technology.

    Behrang Fouladi and Sahand Ghanoun took a long hard look at Z-Wave for their presentation at last week’s Black Hat hacking conference in Las Vegas. The wireless standard dominates home-automation in the US, but the pair discovered some worrying flaws.

    Not only were they able to switch off a motion sensor with a relatively simple replay attack, but they also managed to take control of a wireless door lock by supplanting the proper control centre, potentially allowing a burglar to walk right in and make himself comfortable.

    The Z-Wave specifications are only available to paying customers after they’ve signed the non-disclosure agreement, which makes analysis of the standard difficult by preventing open discussion of potential flaws. It also makes manufacturers lazy in their implementations, which proved crucial to the success of the hack.

    There’s very little open-source code available for the unpublished standard, but by extending the OpenZ-Wave toolkit the pair were able to analyse over-the-air communications with a motion sensor and discovered it was vulnerable to a simple replay attack.

    That shouldn’t be possible – replay attacks are the most basic of penetrative techniques and any modern system should be immune to them, but for some reason the tested Z-Wave sensor wasn’t.

    More formidable was a Z-Wave door lock, as it should be. Commands sent to the lock from the network controller are encrypted using AES128, well beyond the reach of all but the best-funded government agencies, but as is so often the case it’s the implementation, not the encryption, that proved to be flawed.

    An automated home will have a single Z-Wave network, operating in the low-frequency industrial, scientific and medical (ISM) band (868MHz in Europe and 908MHz in the US), and each network is secured with a unique network key.

    That network key is created by the device that appoints itself as network controller (normally a home hub of some sort) and distributed to other devices when they join the network, encrypted using a global key hard coded onto every Z-Wave device

    The Z-Wave global key is only used during network setup – meaning it is of no value to anyone attacking an established network even if it remains a concern in some circumstances. But it turns out the global key isn’t necessary to hijack at least one model of lock.

    When the lock is first set up, and receives the network key from the controller, the user is required to press a physical button on the bottom of the keypad to acknowledge the new device. But once installed the lock can reconnect to a controller (say, after a battery failure) without user interaction, and it turns out that it isn’t very picky about the network controller to which it connects.

    Our attacker just identifies a lock on the network and sends it a new network key from his own network controller; the fickle door lock happily forgets its previous attachment and stands ready to respond to new commands, suitably encrypted using the new key, such as “open the door, please”.

    More testing is needed, but the pair’s hypothesis is that both companies are using example code provided in the Z-Wave software development kit, and that the example code is intended to be just that – an example, not intended for use in actual products.

  14. Tomi Engdahl says:

    US To Standardize Car App/communication Device Components
    http://news.slashdot.org/story/13/08/12/1751209/us-to-standardize-car-appcommunication-device-components

    “The U.S. Department of Transportation has high hopes of standardizing the way autos talk to each other and with other intelligent roadway systems of the future.”

  15. Tomi Engdahl says:

    US to standardize car app/communication device components
    DOT wants what it calls a four layer approach to connected vehicle devices and applications certification
    http://www.networkworld.com/community/blog/us-standardize-car-appcommunication-device-components

    The US Department of Transportation has high hopes of standardizing the way autos talk to each other and with other intelligent roadway systems of the future.

    The department recently issued a call for public and private researchers and experts to help it build what the DOT called “a hypothetical four layer approach to connected vehicle devices and applications certification.”

    The idea is to develop certification that ensures that different components of intelligent travel systems that are manufactured according to connected vehicle technology requirements will be trusted by the system and by users, the DOT stated. With national interoperability comes the opportunity to establish national standards and criteria for certification of individual products that will have access to the system, system processes, and operational procedures.

    Certification research will be primarily focused on understanding the needs for device compliance, system security, and privacy requirements

  16. Tomi Engdahl says:

    The NSA Is Commandeering the Internet
    Technology companies have to fight for their users, or they’ll eventually lose them.
    Bruce Schneier, Aug 12 2013, 10:05 AM
    http://www.theatlantic.com/technology/archive/2013/08/the-nsa-is-commandeering-the-internet/278572/

    It turns out that the NSA’s domestic and world-wide surveillance apparatus is even more extensive than we thought. Bluntly: The government has commandeered the Internet. Most of the largest Internet companies provide information to the NSA, betraying their users. Some, as we’ve learned, fight and lose. Others cooperate, either out of patriotism or because they believe it’s easier that way.

    I have one message to the executives of those companies: fight.

    The NSA doesn’t care about you or your customers, and will burn you the moment it’s convenient to do so.

    We’re already starting to see that. Google, Yahoo, Microsoft and others are pleading with the government to allow them to explain details of what information they provided in response to National Security Letters and other government demands. They’ve lost the trust of their customers, and explaining what they do — and don’t do — is how to get it back. The government has refused; they don’t care.

    It will be the same with you.

    This is why you have to fight. When it becomes public that the NSA has been hoovering up all of your users’ communications and personal files, what’s going to save you in the eyes of those users is whether or not you fought. Fighting will cost you money in the short term, but capitulating will cost you more in the long term.

    Already companies are taking their data and communications out of the US.

    The extreme case of fighting is shutting down entirely. The secure e-mail service Lavabit did that last week

    The same day, Silent Circle followed suit, shutting down their email service in advance of any government strong-arm tactics

    It’s time we called the government’s actions what they really are: commandeering. Commandeering is a practice we’re used to in wartime, where commercial ships are taken for military use, or production lines are converted to military production. But now it’s happening in peacetime. Vast swaths of the Internet are being commandeered to support this surveillance state.

    Journalism professor Jeff Jarvis recently wrote in The Guardian: “Technology companies: now is the moment when you must answer for us, your users, whether you are collaborators in the US government’s efforts to ‘collect it all’ — our every move on the internet or whether you, too, are victims of its overreach.”

  17. Tomi Engdahl says:

    Huawei to provide 400G core links for Thai mobile operator
    http://www.cablinginstall.com/articles/2013/08/huawei-thai-400g.html

    Huawei Technologies has been selected by True, a communications service provider in Thailand, to provide millions of True’s 3G subscribers with diverse services using Huawei 400G core router line cards.

    The development of 3G mobile network services and extensive usage of intelligent terminals for online video services has fomented an explosive growth in data traffic in Asia. This has created an urgent need for True, a communication conglomerate offering TV, broadband, mobile, and electronic cash and payment services in Thailand

    Huawei asserts its core router NE5000E supports flexible networking of 100 Gigabit Ethernet ports, 40 Gigabit Ethernet ports, and 10 Gigabit Ethernet ports to provide an aggregate 400G capacity.

    Huawei says its tests indicate that the energy efficiency ratio of Huawei 400G line cards is less than 1W/G.

  18. Tomi Engdahl says:

    40-, 100 Gigabit Ethernet seen topping $4B by 2017, driven by cloud
    http://www.cablinginstall.com/articles/2013/07/delloro-40-100g.html

    Dell’Oro Group is forecasting that, within the larger Ethernet switch market, revenues for 40 Gigabit Ethernet and 100 Gigabit Ethernet will exceed $4 billion by 2017.

    According to the firm’s latest research, the L2-3 Ethernet Switch market is forecast to approach $25B in 2017, with future growth to be driven primarily by sales of higher speed Ethernet switches optimized for larger data center deployments, as the core of the data center quickly migrates to 40 Gigabit and 100 Gigabit Ethernet.

    “The data center will be the site of almost all revenue growth during the forecast horizon, as the cloud forever changes how networks are built,”

  19. Tomi Engdahl says:

    Management tools bridge cabling with networks’ higher layers
    http://www.cablinginstall.com/articles/print/volume-21/issue-8/features/management-tools-bridge-cabling-with-networks-higher-layers.html?cmpid=$trackid

    Automated infrastructure management, DCIM and more advanced capabilities aim to eliminate the blind spot in full-network management.

    The management and administration of a network’s physical layer–primarily the structured cabling system–has advanced technologically over several years to the point at which latest-available technologies stake a claim to erasing the physical layer’s status as a network-management blind spot.

    “You as data center professionals are often acting in hero mode. You’re being asked to do more with less. You’re asked to improve uptime and SLAs [service level agreements], to optimize with existing resources. You’re being asked to do this with fewer people, with less power, and with less equipment. This was a difficult task when data centers were simple and not so dense. As years have gone on, with data center consolidation, the boom of mobile applications and the need for hosted computing applications, data centers have become much denser and more complicated.”

    He then presented the scenario that clinches the DCIM value proposition: “You as data center professionals are being asked to do this with the same spreadsheets and drawings you’ve always used to manage your data center. A typical data center operation I review still manages enterprise-class data centers with a collection of spreadsheets that contains data that is impossible to correlate. All that data cannot possibly be correlated together to give that data center professional the information they need at their fingertips when they need it.

    “This lack of an integrated system to provide that information, at their fingertips, with visualization, causes data center professionals to do what they have always been doing–to manually walk the data center. If they need to add a piece of equipment, they look at their spreadsheet to see where they might have available space. They look at a different spreadsheet to see where they might have available network connections and power connections. And they’ll visually walk the data center to confirm the information in the spreadsheets–because that information in the spreadsheets can be unreliable. After walking the data center in hopes of seeing space where they can place a piece of equipment or make connections, they go back to implement the change before someone else reserves and takes that space.”

    Malone expanded on the troubleshooting scenario: “Sixty-five to seventy percent of network outages have to do with something on the physical layer. A lot of time is spent figuring out where the problem is.” It is estimated that about 90 percent of troubleshooting time is spent diagnosing, identifying and/or locating the problem, with 10 percent spent actually resolving it. The Quareo solution, he explained, drives the diagnose/identify/locate time down significantly.

    Networks of all types have become increasingly complex and the tools to manage them more sophisticated. Current technologies allow integration of physical-layer management with other network layers that was not practical a few years ago.

  20. Tomi Engdahl says:

    NSA “touches” more of Internet than Google
    In deep packet inspection, it’s not the size of the data that matters.
    http://arstechnica.com/information-technology/2013/08/the-1-6-percent-of-the-internet-that-nsa-touches-is-bigger-than-it-seems/

    According to figures published by a major tech provider, the Internet carries 1,826 Petabytes of information per day. In its foreign intelligence mission, NSA touches about 1.6 percent of that. However, of the 1.6 percent of the data, only 0.025 percent is actually selected for review. The net effect is that NSA analysts look at 0.00004 percent of the world’s traffic in conducting their mission—that’s less than one part in a million.

    Put another way, if a standard basketball court represented the global communications environment, NSA’s total collection would be represented by an area smaller than a dime on that basketball court.

    The numbers are no real surprise—we’ve already discussed how the laws of physics would make it impossible for the NSA to capture everything, or even a significant portion of everything, that passes over the Internet. But they’re also misleading. In the world of deep packet inspection, verbs like “touch,” “select,” “collect,” and “look at” don’t begin to adequately describe what is going on or what information is extracted from traffic in the process. Considering all that’s within what flows across the Internet, 1.6 percent could hold a significant portion of the metadata describing person-to-person communications.

    While 29.21 petabytes is a fraction of the overall traffic on the Internet, it is the equivalent of the traffic that passes through several major Internet exchanges each day. It amounts roughly to 2.77 terabits per second—more than the average throughput of the Equinix exchange network, the CoreSite Any2 Exchange, New York International Internet Exchange (NYIIX), and Seattle Internet Exchange (SIX) combined. In other words, the 1.6 percent of the total of Internet traffic “touched” by the NSA could easily contain much of the traffic passing through the US’ core networks. It can certainly include all the traffic inbound from and outbound to other nations.

    The NSA has approximately 150 XKeyscore collection points worldwide. To reach 29.21 petabytes per day, XKeyscore sites pull in around 190 terabytes a day. And to keep the three-day “buffer” XKeyscore holds of captured traffic, that would mean the sites have an average of about 600 terabytes of storage—the equivalent of a fairly manageable 150 4-TB drives.
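
    Those figures follow directly from the quoted volumes; a minimal sketch of the arithmetic (only the 29.21 PB/day, 150 sites and three-day buffer come from the article, the rest is straightforward division):

    ```python
    # Back-of-the-envelope arithmetic behind the XKeyscore figures quoted above.
    pb_per_day = 29.21          # petabytes "touched" per day
    collection_sites = 150      # approximate number of XKeyscore sites
    buffer_days = 3             # rolling capture buffer per site
    seconds_per_day = 86400

    # Average throughput implied by 29.21 PB/day (taking 1 PB = 1e15 bytes).
    tbps = pb_per_day * 1e15 * 8 / seconds_per_day / 1e12
    print(f"~{tbps:.2f} Tbit/s average")          # ~2.7 Tbit/s, in the region of the 2.77 quoted

    tb_per_site = pb_per_day * 1000 / collection_sites
    print(f"~{tb_per_site:.0f} TB/day per site")  # ~195 TB/day (article: ~190)

    buffer_tb = tb_per_site * buffer_days
    print(f"~{buffer_tb:.0f} TB buffer, ~{buffer_tb / 4:.0f} 4-TB drives per site")
    # ~584 TB and ~146 drives, i.e. roughly the 600 TB / 150 drives cited
    ```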

    Regardless how much data flows through the NSA’s tap points, all of it is getting checked. While the NSA may “touch” only 29.21 petabytes of data a day, it runs its digital fingers through everything that flows through the tap points to do so.

    The NSA’s XKeyscore uses packet analyzers, the hardware plugged into the network down which diverted Internet data is routed, to look at the contents of network traffic as it passes by. The packet analyzers use a set of rules to check each packet they “see” as it is read by the analyzers’ software into memory.

    Packets that don’t meet any of the rules that have been configured are sent along unmolested.

    Packets that match one or more of the rules get routed to processing servers for further analysis. Those rules can be very broad—”grab everything with an IP address in its header that is outside the United States,” for example—or they can look for very specific patterns within packets, such as those of VPN and website log-ins, Skype and VoIP traffic, or e-mails with attachments.
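
    Conceptually, the rule matching described above is simple. Here is a minimal, purely illustrative sketch (hypothetical rules, address ranges and packet fields, not anything from XKeyscore or Narus) of how packets might be checked against broad or narrow selection rules:

    ```python
    # Illustrative packet selection: each rule is a predicate over packet data;
    # packets matching any rule are diverted for analysis, the rest pass through.
    from ipaddress import ip_address, ip_network

    US_RANGES = [ip_network("3.0.0.0/8"), ip_network("9.0.0.0/8")]  # hypothetical "US" prefixes

    def outside_us(pkt):
        return not any(ip_address(pkt["src_ip"]) in net for net in US_RANGES)

    def mail_with_attachment(pkt):
        return pkt["dst_port"] == 25 and b"Content-Disposition: attachment" in pkt["payload"]

    RULES = [outside_us, mail_with_attachment]

    def select(packets):
        for pkt in packets:
            if any(rule(pkt) for rule in RULES):
                yield pkt        # routed to processing servers for further analysis
            # non-matching packets are sent along unmolested

    packets = [
        {"src_ip": "3.1.2.3", "dst_port": 443, "payload": b""},
        {"src_ip": "198.51.100.7", "dst_port": 25,
         "payload": b"Content-Disposition: attachment; filename=plan.pdf"},
    ]
    print(len(list(select(packets))), "packet(s) selected")   # 1
    ```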

  21. Tomi Engdahl says:

    What the NSA can do with “big data”
    The NSA can’t capture everything that crosses the Internet—but doesn’t need to.
    http://arstechnica.com/information-technology/2013/06/what-the-nsa-can-do-with-big-data/

    One organization’s data centers hold the contents of much of the visible Internet—and much of it that isn’t visible just by clicking your way around. It has satellite imagery of much of the world and ground-level photography of homes and businesses and government installations tied into a geospatial database that is cross-indexed to petabytes of information about individuals and organizations. And its analytics systems process the Web search requests, e-mail messages, and other electronic activities of hundreds of millions of people.

    No one at this organization actually “knows” everything about what individuals are doing on the Web, though there is certainly the potential for abuse. By policy, all of the “knowing” happens in software, while the organization’s analysts generally handle exceptions (like violations of the law) picked from the flotsam of the seas of data that their systems process.

    We know some of this thanks to an earlier whistleblower—former AT&T employee Mark Klein, who revealed in 2006 that AT&T had helped NSA install a tap into the fiber backbone for AT&T’s WorldNet, “splitting” the traffic to run into a Narus Insight Semantic Traffic Analyzer. (The gear has since been rebranded as “Intelligence Traffic Analyzer,” or ITA.)

    Narus’ gear was also used by the FBI as a replacement for its homegrown “Carnivore” system. It scans packets for “tag pairs”—sets of packet attributes and values that are being monitored for—and then grabs the data for packets that match the criteria.

    In an interview I conducted with Narus’ director of product management for cyber analytics Neil Harrington in September of 2012, Harrington said the company’s Insight systems can analyze and sort gigabits of data each second. “Typically with a 10 gigabit Ethernet interface, we would see a throughput rate of up to 12 gigabits per second with everything turned on. So out of the possible 20 gigabits, we see about 12. If we turn off tag pairs that we’re not interested in, we can make it more efficient.”

    A single Narus ITA is capable of processing the full contents of 1.5 gigabytes worth of packet data per second. That’s 5400 gigabytes per hour, or 129.6 terabytes per day, for each 10-gigabit network tap. All that data gets shoveled off to a set of logic servers using a proprietary messaging protocol, which process and reassemble the contents of the packets, turning petabytes per day into gigabytes of tabular data about traffic—the metadata of the packets passing through the box— and captured application data.

    NSA operates many of these network tap operations both in the US and around the world.

    Storing it, indexing it, and analyzing it in volume required technology beyond what was generally available commercially. Considering that, according to Cisco, the total world Internet traffic for 2012 was 1.1 exabytes per day, it is physically impossible, let alone practical, for the NSA to capture and retain even a significant fraction of the world’s Internet traffic on a daily basis.

    There’s also the issue of intercepting packets protected by Secure Socket Layer (SSL) encryption. Breaking encryption of SSL-protected traffic is, under the best of circumstances, computationally costly and can’t be applied across the whole of Internet traffic (despite the apparent certificate-cracking success demonstrated by the Flame malware attack on Iran). So while the NSA can probably do it, they probably can’t do it in real-time.

    NSA is still collecting call data records for all domestic calls and calls between US and foreign numbers

    “comprehensive communications routing information, including but not limited to session identifying information (e.g., originating and terminating telephone number, International Mobile Subscriber Identity (IMSI) number, etc.), trunk identifier, telephone calling card numbers, and time and duration of call.”

    In 2006, USA Today called the call database “the largest database in the world.”

    BigTable and Hadoop-based databases offered a way to handle huge amounts of data being captured by the NSA’s operations, but they lacked something critical to intelligence operations: compartmentalized security (or any security at all, for that matter). So in 2008, NSA set out to create a better version of BigTable, called Accumulo—now an Apache Foundation project.

    Accumulo is a “NoSQL” database, based on key-value pairs. It’s a design similar to Google’s BigTable or Amazon’s DynamoDB, but Accumulo has special security features designed for the NSA, like multiple levels of security access. The program is built on the open-source Hadoop platform and other Apache products.

    One of those is called Column Visibility—a capability that allows individual items within a row of data to have different classifications.
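
    Column Visibility amounts to cell-level access labels: each cell carries labels, and a scan only returns cells whose labels are covered by the reader’s authorizations. A minimal conceptual sketch of the idea (Accumulo’s real API is Java and its visibility expressions are richer; this just models the behavior described above):

    ```python
    # Conceptual model of cell-level visibility: a reader sees a cell only if
    # their authorizations cover all of the cell's labels.
    cells = [
        {"row": "msg-001", "col": "sender",   "vis": {"SIGINT"},       "val": "alice@example.org"},
        {"row": "msg-001", "col": "body",     "vis": {"SIGINT", "TS"}, "val": "..."},
        {"row": "msg-001", "col": "ingested", "vis": set(),            "val": "2013-06-05"},
    ]

    def scan(cells, authorizations):
        """Return only the cells this reader is allowed to see."""
        return [c for c in cells if c["vis"] <= authorizations]

    analyst = {"SIGINT"}                     # cleared for SIGINT but not TS
    for cell in scan(cells, analyst):
        print(cell["row"], cell["col"], cell["val"])
    # Prints the sender and ingest date but not the TS-labelled body:
    # items within the same row carry different classifications.
    ```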

    Accumulo also can generate near real-time reports from specific patterns in data. So, for instance, the system could look for specific words or addressees in e-mail messages that come from a range of IP addresses; or, it could look for phone numbers that are two degrees of separation from a target’s phone number. Then it can spit those chosen e-mails or phone numbers into another database, where NSA workers could peruse it at their leisure.

    In other words, Accumulo allows the NSA to do what Google does with your e-mails and Web searches—only with everything that flows across the Internet, or with every phone call you make.

    One of the obstacles to NSA monitoring of Internet communications is SSL. On the surface, “cloud” services such as Gmail, Facebook, and the service formerly known as Hotmail have made that problem harder to overcome as they’ve pulled more interactions in behind SSL-protected sessions. But ironically, those communications services actually started to make it easier for the NSA to collect that protected data through the PRISM program.

    PRISM gives the NSA an online connection to cloud providers.

    The NSA could theoretically export much of the metadata from these services—without having a specific target—in order to preserve data in the event that the NSA has cause to perform a search. But it’s unlikely, simply for storage capacity reasons, that they copy the application data itself—e-mails, attachments, etc.—on a large scale.

  22. Tomi Engdahl says:

    Building a panopticon: The evolution of the NSA’s XKeyscore
    How the NSA went from off-the-shelf to a homegrown “Google for packets.”
    http://arstechnica.com/information-technology/2013/08/building-a-panopticon-the-evolution-of-the-nsas-xkeyscore/

    The National Security Agency’s (NSA) apparatus for spying on what passes over the Internet, phone lines, and airways has long been the stuff of legend, with the public catching only brief glimpses into its Leviathan nature. Thanks to the documents leaked by former NSA contractor Edward Snowden, we now have a much bigger picture.

    After the attacks of September 11, 2001 and the subsequent passage of the USA PATRIOT Act, the NSA and other organizations within the federal intelligence, defense, and law enforcement communities rushed to up their game in Internet surveillance. The NSA had already developed a “signals intelligence” operation that spanned the globe. But it had not had a mandate for sweeping surveillance operations—let alone permission for it—since the Foreign Intelligence Surveillance Act (FISA) was passed in 1978. (Imagine what Richard Nixon could have done with Facebook monitoring.)

    Early on, the NSA needed a quick fix. It got that by buying largely off-the-shelf systems for network monitoring, as evidenced by the installation of hardware from Boeing subsidiary Narus at network tap sites such as AT&T’s Folsom Street facility in San Francisco. In 2003, the NSA worked with AT&T to install a collection of networking and computing gear—including Narus’ Semantic Traffic Analyzer (STA) 6400—to monitor the peering links for AT&T’s WorldNet Internet service. Narus’ STA software, which evolved into the Intelligent Traffic Analyzer line, was also used by the FBI as a replacement for its Carnivore system during that time frame.

    Narus’ system is broken into two parts. The first is a computing device in-line with the network that watches the metadata in the packets passing by for ones that match “key pairs,” which can be a specific IP address or a range of IP addresses, a keyword within a Web browser request, or a pattern identifying a certain type of traffic such as a VPN or Tor connection.

    Packets that match those rules are thrown to the second part of Narus’ system—a collection of analytic processing systems—over a separate high-speed network backbone by way of messaging middleware similar to the transaction systems used in financial systems and commodity trading floors.

    In the current generation of Narus’ system, the processing systems run on commodity Linux servers and re-assemble network sessions as they’re captured, mining them for metadata, file attachments, and other application data and then indexing and dumping that information to a searchable database.

    There are a couple of trade-offs with Narus’ approach. For one thing, the number of rules loaded on the network-sensing machine directly impact how much traffic it can handle—the more rules, the more compute power burned and memory consumed per packet, and the fewer packets that can be handled simultaneously. When I interviewed Narus’ director of product management for cyber analytics Neil Harrington last year, he said that “with everything turned on” on a two-way, 10-gigabit Ethernet connection—that is, with all of the pre-configured filters turned on—”out of the possible 20 gigabits, we see about 12. If we turn off tag pairs that we’re not interested in, we can make it more efficient.”

    In other words, to handle really big volumes of data and not miss anything with a traffic analyzer, you have to widen the scope of what you collect. The processing side can handle the extra data—as long as the bandwidth of the local network fabric isn’t exceeded and you’ve added enough servers and storage. But that means that more information is collected “inadvertently” in the process. It’s like catching a few dolphins so you don’t miss the tuna.

    Collecting more data brings up another issue: where to put it all and how to transport it. Even when you store just the cream skimmed off the top of the 129.6 terabytes per day that can be collected from a 10-gigabit network tap, you’re still faced with at least tens of terabytes of data per tap that need to be written to a database. The laws of physics prevented the NSA from moving all that digested data back over its own private networks to a central data center; getting all the raw packets collected by the taps back home was out of the question.

    All of these considerations were behind the design of XKeyscore. Based on public data (such as “clearance” job listings and other sources), the NSA used a small internal startup-like organization made up of NSA personnel and contract help from companies such as defense contractor SAIC to build and maintain XKeyscore.

    Built with the same fundamental front-end principles (albeit with some significant custom code thrown in), XKeyscore solved the problem of collecting at wire speed by dumping a lot more to a local storage “cache.” And it balanced the conflict between minimizing how much data got sent home to the NSA’s data centers and giving analysts flexibility and depth in how they searched data by using the power of Web interfaces like Representational State Transfer (REST).

    XKeyscore takes the data brought in by the packet capture systems connected to the NSA’s taps and processes it with arrays of Linux machines. The Linux processing nodes can run a collection of “plugin” analysis engines that look for content in captured network sessions; there are specialized plugins for mining packets for phone numbers, e-mail addresses, webmail and chat activity, and the full content of users’ Web browser sessions. For selected traffic, XKeyscore can also generate a full replay of a network session between two Internet addresses.
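
    The plugin structure described above is straightforward to picture: each plugin inspects a reassembled session and emits whatever fields it knows how to extract. A minimal, purely illustrative sketch (hypothetical plugins and field names, not XKeyscore’s actual code):

    ```python
    # Illustrative plugin dispatch over reassembled network sessions.
    import re

    def email_addresses(session):
        return {"emails": re.findall(rb"[\w.+-]+@[\w.-]+\.\w+", session["payload"])}

    def phone_numbers(session):
        return {"phones": re.findall(rb"\+?\d[\d -]{7,}\d", session["payload"])}

    PLUGINS = [email_addresses, phone_numbers]

    def analyze(session):
        record = {"src": session["src"], "dst": session["dst"]}
        for plugin in PLUGINS:
            record.update(plugin(session))   # each plugin contributes the fields it extracts
        return record                        # this record is what gets indexed and searched

    session = {"src": "203.0.113.5", "dst": "198.51.100.7",
               "payload": b"From: bob@example.net\nCall me at +1 202 555 0101"}
    print(analyze(session))
    ```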

  23. Tomi Engdahl says:

    Mobile subscriber growth in the U.S. slows to a standstill
    http://gigaom.com/2013/08/13/mobile-subscriber-growth-in-the-u-s-slows-to-a-standstill/

    The mobile industry saw its slowest quarter of overall subscriber growth since the dawn of the cellular age. Without new customers to connect, carriers are stealing them from one another and looking toward M2M for future growth.

    U.S. mobile carriers added only 139,000 new connections to their networks in the second quarter, making it the most lackluster period of growth in the modern age of mobile, according to a new report from Chetan Sharma Consulting.

    Creating new mobile subscribers has become increasingly difficult for carriers in recent years as mobile phones have proliferated across society, but operators were hoping to keep the market humming along by connecting tablets, cameras, cars, farm equipment and every manner of object in the emerging internet of things. With the exception of tablets, that’s clearly not happening.

  24. Tomi Engdahl says:

    Infographic: An Amazing Atlas of the World Wide Web
    http://www.wired.com/design/2013/08/infographic-these-beautiful-visualization-create-an-atlas-of-the-world-wide-web/

    Aizenberg created the Atlas of the World Wide Web, a 120-page visual guide to how the internet has blurred the traditional, physical borders around the world. The atlas’ six chapters, which span everything from IP addresses to internet infrastructure to e-commerce, feature striking visualizations that highlight often unnoticed trends brought on by the spread of the internet.

  25. Tomi Engdahl says:

    Cisco to Cut 4,000 Jobs, Blames Weak Economic Recovery
    http://online.wsj.com/article_email/SB10001424127887323639704579013113620184906-lMyQjAxMTAzMDEwNDExNDQyWj.html

    The Silicon Valley network-equipment giant on Wednesday said it would cut 4,000 jobs, or 5% of its workforce, despite reporting an 18% jump in profit in the fourth fiscal quarter.

    John Chambers, Cisco’s chief executive, blamed the decision largely on a disappointing economic recovery that is affecting particular countries and product lines in different ways.

    “What we see is slow steady improvement, but not at the pace we want,” Mr. Chambers told analysts on a conference call.

    While orders from customers in the Americas rose 5% in the fourth period, for example, orders from Asia declined 3%—and its business in China fell 6%.

    Cisco, the biggest maker of networking equipment, often sees business trends sooner than other companies. Consequently, its results and comments about business conditions are often seen as a harbinger of things to come for other hardware and software companies.

  26. Tomi Engdahl says:

    Wireless Devices Go Battery-Free With New Communication Technique
    http://hardware.slashdot.org/story/13/08/14/2139257/wireless-devices-go-battery-free-with-new-communication-technique

    “[E]ngineers have created a new wireless communication system that allows devices to interact with each other without relying on batteries or wires for power. The new communication technique, which the researchers call ‘ambient backscatter,’ takes advantage of the TV and cellular transmissions that already surround us around the clock. Two devices communicate with each other by reflecting the existing signals to exchange information.”

  27. Tomi Engdahl says:

    MOST and AVB: Two candidates for next-gen automotive infotainment networks
    http://www.edn.com/design/automotive/4419501/MOST-and-AVB–Two-candidates-for-next-gen-automotive-infotainment-networks

    MOST150 is the prevalent networking technology for the coming years in automotive infotainment. Although MOST150 is only at the beginning of mass deployment, pre-development of the next generation has already started. With IEEE802.1 AVB, another concept has entered the discussion about the successor to MOST150. AVB needs to be considered, appropriately evaluated and compared with current and evolving MOST concepts.

    The MOST Technology framework is the dominant technology in automotive telematics and infotainment networks, and has been smoothly brought to the road in its third generation with MOST150 in 2012. The MOST Cooperation has proven to be an excellent body, permitting very direct specification and implementation processes for the benefit of both OEMs and suppliers. The close and open cooperation of OEMs and suppliers is the key factor of this success story. MOST150 systems will continue to boom in car models as the dominant infotainment network system until at least 2020.

    With IEEE802.1 Audio-/Video-Bridging standards [1-5], packet-based synchronized streaming has become a competitive alternative in media streaming applications. Ethernet Network Interfaces supporting AVB have just been launched in consumer products [6] and AVB will conceivably become a common feature in CE devices which commonly use IEEE802.3 Ethernet and IEEE802.11 WiFi LAN technologies.

    Here, another trend has to be taken into account, as Ethernet is also currently evaluated in the automotive industry as a future technology option in automotive domains adjacent to the telematics domain.

    Due to these facts and driven by new use cases, cross-domain interoperability is rapidly gaining importance and the applicability of AVB concepts and their interoperability with MOST need to be evaluated.

    Regarding automotive infotainment systems, a crucial feature is the seamless integration of audio/video (A/V) interfaces in the network interface controllers. A/V streams should be directly transferred to corresponding A/V interfaces, thus unloading the host processor from handling A/V tasks. The MOST150 INIC already integrates various interfaces such as MediaLB, TSI, SPI, I2C and I2S, benefitting from its synchronous network layer transport.

    AVB-enabled Ethernet NICs can, in principle, provide similar interfaces by separating AVB streams from other traffic on the Data Link Layer, taking care that the AV streams do not burden the host processor. However, such functions are not found in today’s NIC implementations.

    Both technologies are essentially independent of baseband physical layers. MOST150 INIC interconnects the FOT using LVDS interfaces, whereas Ethernet MACs connect to Ethernet PHY units with Media Independent Interface variants. Otherwise, choice of a physical layer, whether on optical media, electrical over twisted pairs or coax, is to a large extent independent of the approach taken on the data link layer.

    Whereas MOST has been designed and developed for automotive use and automotive qualification, for AVB a lot of experience still needs to be gained regarding automotive use of the standards and realization of automotive requirements. This goes from the API level to service-discovery and control mechanisms, to wake-up strategies and times, to power consumption, just to name a few. AVB generation 2, which is only now in standardization, will address some crucial issues, including the handling of high-frequency/low-volume data such as sensor data and further minimization of latency by preemption of packets.

    Both MOSTnG and AVB can be considered technologically feasible concepts, sufficient to meet the demands of next generation infotainment networks.

    There is a great deal of work to do for automotive use of AVB. Although its standardization enables flexible usage that results in a prolific supplier and market situation, the standardization processes are challenging, with lots of voices from non-automotive interest groups in manifold international committees and working groups.

    With these points in mind, at the moment it is very difficult to identify clear decisive factors on a technical level for one approach or the other. Regarding market size and heterogeneity on each side, especially in the long term, AVB is an option that needs to be seriously considered, at least. As for automotive usage, primarily in the infotainment domain, AVB needs to be evaluated and its performance needs to be compared to the MOST system.

  28. Tomi Engdahl says:

    In Snowden’s wake, China will probe IBM, Oracle, and EMC for security threats
    http://qz.com/115970/in-snowdens-wake-china-will-probe-ibm-oracle-and-emc-for-security-threats/

    The Edward Snowden scandal is about to become a major headache for some US tech firms, as the Chinese government prepares to probe IBM, Oracle, and EMC over “security issues,” according to the official Shanghai Securities News.

    “At present, thanks to their technological superiority, many of our core information technology systems are basically dominated by foreign hardware and software firms, but the Prism scandal implies security problems,” an anonymous source told Shanghai Securities News, according to a Reuters report.

    IBM, the world’s largest IT company, Oracle, the biggest enterprise software firm, and EMC, a leading cloud computing and Big Data provider, all have substantial businesses in China that could be damaged if Beijing takes a hard line on potential NSA intrusions—much as China-based Huawei, the world’s biggest vendor of telecom equipment, has been largely blocked from doing business in the United States.

    Investigators at China’s Ministry of Public Security and a cabinet-level research center will reportedly carry out the probe.

    Previously China’s state-run media, which is often used to signal government policy, identified eight US companies—Cisco, IBM, Google, Qualcomm, Intel, Apple, Oracle, and Microsoft—as US government proxies that posed a “terrible security threat.”

  29. Tomi Engdahl says:

    Google’s downtime caused a 40% drop in global traffic
    https://engineering.gosquared.com/googles-downtime-40-drop-in-traffic

    Google.com was down for a few minutes between 23:52 and 23:57 BST on 16th August 2013. This had a huge effect on the number of pageviews coming into GoSquared’s real-time tracking – around a 40% drop.

    That’s huge. As internet users, our reliance on google.com being up is huge. It’s also of note that pageviews spiked shortly afterwards, as users managed to get to their destination.

  30. Tomi Engdahl says:

    Teens do realize what they can reveal on social media

    Contrary to popular belief, teens care about their privacy online. The Pew Research Center studied 12-to-17-year-olds’ perceptions of online privacy.

    The study also revealed that 70 percent of teens have asked for advice on how to manage their privacy. Many adults may be surprised that parents and friends are nearly equally important advisers.

    42% have asked friends for advice on managing online privacy
    41% have asked their parents
    37% have asked a sibling or cousin
    13% have sought information on a website
    9% have asked a teacher
    3% have asked someone else

    Of girls aged 12 to 13, 77 per cent have asked for advice on managing their privacy; among boys the figure is 66 per cent.

    “Privacy settings are easy. I think they [Facebook] change them often, reset them or something. So you have to constantly keep yourself up to date,” said one 13-year-old boy who took part in the study.

    Source: http://www.tietoviikko.fi/kaikki_uutiset/teinit+kylla+tajuavat+mita+somessa+voi+paljastaa/a922250

  31. Tomi Engdahl says:

    Most of U.S. Is Wired, but Millions Aren’t Plugged In
    http://www.nytimes.com/2013/08/19/technology/a-push-to-connect-millions-who-live-offline-to-the-internet.html?pagewanted=all&_r=0

    The Obama administration has poured billions of dollars into expanding the reach of the Internet, and nearly 98 percent of American homes now have access to some form of high-speed broadband. But tens of millions of people are still on the sidelines of the digital revolution.

    Mr. Griffin is among the roughly 20 percent of American adults who do not use the Internet at home, work and school, or by mobile device, a figure essentially unchanged since Barack Obama took office as president in 2009 and initiated a $7 billion effort to expand access, chiefly through grants to build wired and wireless systems in neglected areas of the country.

    Administration officials and policy experts say they are increasingly concerned that a significant portion of the population, around 60 million people, is shut off from jobs, government services, health care and education, and that the social and economic effects of that gap are looming larger. Persistent digital inequality — caused by the inability to afford Internet service, lack of interest or a lack of computer literacy — is also deepening racial and economic disparities in the United States, experts say.

    Seventy-six percent of white American households use the Internet, compared with 57 percent of African-American households

    The figures also show that Internet use over all is much higher among those with at least some college experience and household income of more than $50,000.

    Low adoption rates among older people remain a major hurdle. Slightly more than half of Americans 65 and older use the Internet, compared with well over three-quarters of those under 65.

    The percentage of people 18 years and older in the United States who have adopted the Internet over the past two decades has grown at a rate not seen since the popularization of the telephone, soaring nearly fivefold, from 14 percent in 1995. Although that growth slowed in more recent years, it had still moved close to 80 percent of the population by the beginning of the Obama administration in 2009, according to several academic and government studies.

    Researchers say the recent recession probably contributed to some of the flattening in Internet adoption, just as the Great Depression stalled the arrival of home telephone service. But a significant portion of nonusers cite their lack of digital literacy skills as a discouraging factor.

    The Federal Communications Commission and some Internet providers have started programs to make Internet service more affordable for low-income households.

  32. Tomi Engdahl says:

    Required to use SNMPv3?
    http://www.dpstele.com/dpsnews/snmp_snmpv3_mediation_equipment.php?article_id=58829&m_row_id=1999640&mailing_id=10478&link=S&uni=229765212f79946b46

    How much SNMP v1 & v2c equipment do you have in your network? At most companies, it’s a big number.

    Despite the obvious advantages of an open standard, early versions of SNMP (v1 & v2c) were not built with security in mind. This poses a big challenge today if you work at a security-conscious organization, like a utility or government entity, that now requires encrypted SNMPv3.

    If you’ve got a lot of SNMP v1 & v2c equipment in your network combined with a requirement to use only secure SNMPv3, it may seem like you’ve been given 2 incompatible goals:

    1. Stop using all of your SNMP v1 & v2c equipment.
    2. Don’t spend budget money on new SNMPv3 equipment.

    What you need is a small (and relatively inexpensive) box that will mediate SNMP v1 & v2c traps to encrypted SNMPv3.

    Your older SNMP equipment will send unencrypted traps only as far as the local NetGuardian (not across the wider network). The NetGuardian will convert the trap to SNMPv3 and send it back to your central SNMP manager.
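    As a rough illustration of the forwarding half of such a mediation box, here is a hedged sketch using the open-source pysnmp library: an alarm that arrived as an insecure v1/v2c trap is re-sent to the central manager as an authenticated, encrypted SNMPv3 trap. The host name, user name and pass phrases below are placeholders, not values from any real product.

    from pysnmp.hlapi import (
        SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
        NotificationType, ObjectIdentity, sendNotification,
        usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
    )

    def forward_as_v3(trap_oid, manager):
        """Re-send one received alarm to the central SNMP manager as a v3 trap."""
        errorIndication, _, _, _ = next(sendNotification(
            SnmpEngine(),
            UsmUserData('mediator-user', 'auth-pass-phrase', 'priv-pass-phrase',
                        authProtocol=usmHMACSHAAuthProtocol,   # SHA authentication
                        privProtocol=usmAesCfb128Protocol),    # AES-128 encryption
            UdpTransportTarget((manager, 162)),
            ContextData(),
            'trap',
            NotificationType(ObjectIdentity(trap_oid)),
        ))
        if errorIndication:
            print(errorIndication)

    # e.g. re-send a standard linkDown notification to the manager:
    forward_as_v3('1.3.6.1.6.3.1.1.5.3', 'snmp-manager.example.net')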

  33. Tomi Engdahl says:

    ENISA reveals some surprising reasons why millions of internet users suffered from communications disruptions last year.

    The most harmful problems were not caused by sudden waves of online crime but by fairly traditional technical failures. This is shown in the annual report of ENISA, the European Union’s network and information security agency, which reviews last year’s network outages.

    Measured by how many users each incident affected on average, network overload problems were in a class of their own: the average was as high as 9.4 million users. Software errors came second at 4.3 million users per incident, and power failures third with 3.1 million users.

    Cyber attacks came only fourth, affecting an average of “only” about 1.8 million users per incident.

    According to ENISA, hardware failure was clearly the most common cause of network problems, and broken switches were mostly to blame.

    Nature also took its toll. Storms hit a number of countries and caused widespread power outages, which led to prolonged battery exhaustion at many mobile phone masts and interruptions in mobile communications.

    Source: http://www.digitoday.fi/yhteiskunta/2013/08/20/patkiiko-netti-tassa-syyt/201311522/66

  34. Tomi Engdahl says:

    ENISA Annual Incident Reports 2012
    https://www.enisa.europa.eu/activities/Resilience-and-CIIP/Incidents-reporting/annual-reports/annual-incident-reports-2012/

    This report provides an overview of the process and an aggregated analysis of the 79 incident reports of severe outages of electronic communication networks or services which were reported by national regulators during 2012.

  35. Tomi Engdahl says:

    For example, Germany is running its Industrie 4.0 programme, under which “all machinery, equipment and processes are connected to the Internet and can be controlled with the most appropriate tools.”

    New technology and networking make it much cheaper to produce according to real-time consumer preferences. The global business potential of intelligent factories has been estimated at $1.95 trillion by 2020.

    The trend The Economist has dubbed the third industrial revolution is this same networking, combining Internet-based factory networks, operations management and organizational decision-makers – and also including the largest customers, both before and after the purchase transaction itself.

    This kind of technology-based, intelligent, needs-driven production requires networking, strong technological know-how and new business models.

    High-end manufacturing requires intelligence in people, machinery and processes.

    Source: http://www.tietoviikko.fi/cio/elamantaparemontti/a922613

  36. Tomi Engdahl says:

    HTTP 2.0 Will Be a Binary Protocol
    http://tech.slashdot.org/story/13/07/09/1455200/http-20-will-be-a-binary-protocol?sdsrc=popbyskid

    “Unlike previous versions of the HTTP protocol, this version will be a binary format, for better or worse. However, this protocol is also completely optional”

  37. Tomi Engdahl says:

    These are the places where a hacker can strike in a car

    The increasingly complex information technology in cars raises the risk of hacking. Vulnerable places include the control systems and the brakes.

    A modern car can contain dozens of embedded computers, which makes reliability and safety testing more demanding. This has been noticed by Oulu-based security testing tools developer Codenomicon.

    “Vulnerabilities have been found. Car manufacturers and component suppliers correct most of them in quality control,” says Chief Technology Officer Ari Takanen.

    The concern stems from the fact that the entertainment system is connected to the car’s automation over the CAN bus (Controller Area Network). The physical CAN bus is a very standard implementation, but the application data varies by manufacturer.

    “Errors have been detected in the software. It has been possible, for example, to turn off the car with a Bluetooth command.”

    Codenomicon is working with a number of car manufacturers, but Takanen cannot name the manufacturers or models.

    A taste of car hacking was seen this summer at the Def Con 21 hacker event in Las Vegas. Researchers Charlie Miller and Chris Valasek of security company IOActive demonstrated tampering with the CAN bus of a Ford Escape and a Toyota Prius using a computer connected to the bus.

    Car information technology was already broken into several years ago, but Miller and Valasek’s 110-page report showed the importance of the diagnostic bus. A computer connected to the bus can gain control of almost any system: the brakes, the throttle, the steering and the security systems.

    Strong authentication has not been built in, because controlling the bus only requires physically connecting a device to it, for example to the bus connector between the front seats.
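    As a hedged illustration of what “just connect to the bus” means in practice, the sketch below uses the open-source python-can library to send a standard OBD-II request (mode 0x01, PID 0x0D, vehicle speed) and read the reply. The channel name and the simple response handling are assumptions for illustration, not details from the Miller/Valasek work.

    import can

    # Any node physically attached to the bus can talk on it; no authentication is involved.
    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    # OBD-II functional request: 2 data bytes follow, mode 01, PID 0D (vehicle speed).
    request = can.Message(arbitration_id=0x7DF,
                          data=[0x02, 0x01, 0x0D, 0x00, 0x00, 0x00, 0x00, 0x00],
                          is_extended_id=False)
    bus.send(request)

    reply = bus.recv(timeout=1.0)                       # ECUs answer on IDs 0x7E8-0x7EF
    if reply is not None and (reply.arbitration_id & 0x7F8) == 0x7E8:
        print("vehicle speed:", reply.data[3], "km/h")  # byte A of the 41 0D response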

    Hyppönen says the threat of such attacks should not grow as long as the attacker has to get a physical device inside the car.

    Takanen estimates that at this stage the risks are more likely to involve vandalism or safety than economic gain for attackers.

    Source: http://www.3t.fi/artikkeli/uutiset/teknologia/naihin_paikkoihin_hakkeri_voi_autossa_iskea

  38. Tomi Engdahl says:

    US court rules IP address cloaks may break law
    ‘Published’ might not mean ‘available to anyone’
    http://www.theregister.co.uk/2013/08/20/us_court_rules_ip_address_cloaks_can_violate_cfaa/

    If you’re a normal Internet user, you probably think you have the right to access anything that’s put before the public. Not any more, at least in America, where the Computer Fraud and Abuse Act has been invoked to support a user-specific ban on accessing a Website, and in which the use of a proxy to circumvent a block has been ruled illegal.

    So, while not ruling all proxy use illegal, the case certainly opens a crack for wider bans on IP address cloaking. Because Craigslist had instructed 3Taps that it was not authorised to access its Website, and because that instruction was specific to 3Taps, its use of proxies constituted “circumvention” in the language of the CFAA.

    However, an ordinary user – say, an Australian that took the advice of the Australian Consumers Association and used IP spoofing to get around geographically-specific software pricing – would not be in the same position. Such a user would be breaking a site’s terms and conditions, which judge Breyer said did not violate the CFAA, at least until a specific ban were instituted by the site.

  39. Tomi Engdahl says:

    Technology Leaders Launch Partnership to Make Internet Access Available to All
    http://newsroom.fb.com/News/690/Technology-Leaders-Launch-Partnership-to-Make-Internet-Access-Available-to-All

    Mark Zuckerberg, founder and CEO of Facebook, today announced the launch of internet.org, a global partnership with the goal of making internet access available to the next 5 billion people.

    “There are huge barriers in developing countries to connecting and joining the knowledge economy. Internet.org brings together a global partnership that will work to overcome these challenges, including making internet access available to those who cannot currently afford it.”

    Today, only 2.7 billion people – just over one-third of the world’s population — have access to the internet. Internet adoption is growing by less than 9% each year, which is slow considering how early we are in its development.

    The goal of Internet.org is to make internet access available to the two-thirds of the world who are not yet connected, and to bring the same opportunities to everyone that the connected third of the world has today.

    The founding members of internet.org — Facebook, Ericsson, MediaTek, Nokia, Opera, Qualcomm and Samsung — will develop joint projects, share knowledge, and mobilize industry and governments to bring the world online. These founding companies have a long history of working closely with mobile operators and expect them to play leading roles within the initiative, which over time will also include NGOs, academics and experts as well. Internet.org is influenced by the successful Open Compute Project, an industry-wide initiative that has lowered the costs of cloud computing by making hardware designs more efficient and innovative.

    In order to achieve its goal of connecting the two-thirds of the world who are not yet online, internet.org will focus on three key challenges in developing countries:

    Making access affordable

    Using data more efficiently

    Helping businesses drive access

    By reducing the cost and amount of data required for most apps and enabling new business models, Internet.org is focused on enabling the next 5 billion people to come online.

    Facebook, Ericsson, MediaTek, Nokia, Opera, Qualcomm, Samsung and other partners will build on existing partnerships while exploring new ways to collaborate to solve these problems.

    The Internet.org website launches today and provides an overview of the mission and goals, as well as a full list of the partners. In the coming weeks, it will feature interviews with technology leaders and experts, along with the latest news on Internet.org activities.

  40. Tomi Engdahl says:

    Your iPhone Uses More Energy Than A Refrigerator? Controversial New Research Spurs Debate
    http://www.huffingtonpost.com/2013/08/20/iphone-energy-refrigerator-controversial-study_n_3782211.html

    Is your iPhone using more energy annually than your fridge?

    That’s the surprising — and increasingly controversial — claim laid out by Digital Power CEO Mark P. Mills in his new paper, “The Cloud Begins With Coal: Big Data, Big Networks, Big Infrastructure, and Big Power.”

    From the paper:

    Reduced to personal terms, although charging up a single tablet or smart phone requires a negligible amount of electricity, using either to watch an hour of video weekly consumes annually more electricity in the remote networks than two new refrigerators use in a year. And as the world continues to electrify, migrating towards one refrigerator per household, it also evolves towards several smartphones and equivalent per person.

    The claim is based on a smartphone’s total energy usage per year, meaning Mills’ conclusion takes into account the sum of energy used for wireless connections, data usage and battery charging.

    In an email to TIME, Luke wrote:

    Last year the average iPhone customer used 1.58 GB of data a month, which times 12 is 19 GB per year. The most recent data put out by a ATKearney for mobile industry association GSMA (p. 69) says that each GB requires 19 kW. That means the average iPhone uses (19kw X 19 GB) 361 kwh of electricity per year. In addition, ATKearney calculates each connection at 23.4 kWh. That brings the total to 384.4 kWh. The electricity used annually to charge the iPhone is 3.5 kWh, raising the total to 388 kWh per year. EPA’s Energy Star shows refrigerators with efficiency as low as 322 kWh annually.

    However, not everyone is as convinced by Mills’ report, which was sponsored by the National Mining Association and the American Coalition for Clean Coal Energy. MSN News evaluated the study and found Mills’ claim about iPhone energy “false.”

    The MSN article claims that Mills is actually recycling similar research he published in 2000 about the amount of energy needed to power a Palm Pilot vs. a refrigerator.

  41. Tomi Engdahl says:

    iPhone doesn’t really use more power than fridge
    But scientists are bitterly divided over the smartphone’s carbon footprint
    http://www.marketwatch.com/story/iphone-uses-100-times-less-electricity-than-fridge-2013-08-20

    Charging an iPhone consumes 100 times less electricity than a fridge, researchers say, but a study released last week suggests that the smartphone’s hunger for power is far greater than that of even the biggest kitchen appliance.

    And although a smartphone only costs about 500 kwh to manufacture — versus 1,000 kwh for a fridge — Mills contends that the annual energy allocated to making each smartphone is up to three times greater because, according to the Electric Power Research Institute, most fridges last 18 years and consumers update phones every three to five years. “When you buy a phone, you are also paying for the cost to build it,” Mills says. (His report “The Cloud Begins With Coal” is sponsored by the National Mining Association and American Coalition for Clean Coal Electricity.)

    The irony is that smartphones need only a minuscule amount of electricity to charge. It takes around 3.3 kilowatt hours per year to charge an iPhone 4, according to the Electric Power Research Institute, a nonprofit national energy research organization; that’s less than the 400 to 450 kwh per year it takes to charge the average family refrigerator. To put that in perspective: An iPhone costs around 38 cents per year to charge, based on one single charge per day, the EPRI found, compared with $65.72 per year for a refrigerator. (A desktop computer, by comparison, costs $28.21 a year to charge.)

    The same is true for Samsung’s smartphone.

    However, the carbon footprint also depends on whether people are using cellular or Wi-Fi, some researchers say.

    Another variable that puts scientists at odds with each other: Researchers also argue about the amount of energy being used per gigabyte of data. Mills calculated use at 2 kWH per gigabyte on the cellular network, based on a 2012 European-wide study, “How Much Energy is Needed to Run a Wireless Network.” A 2013 “The Mobile Economy” report by management consultancy A.T. Kearney says the energy use per gigabyte could run as high as 19 kwH.
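    The disputed arithmetic is easy to reproduce. The back-of-the-envelope sketch below simply plugs the figures quoted above into one formula; the point is how strongly the conclusion depends on the assumed network energy per gigabyte.

    GB_PER_YEAR    = 1.58 * 12   # ~19 GB, the quoted average iPhone data use per year
    CONNECTION_KWH = 23.4        # per-connection overhead attributed to A.T. Kearney
    CHARGING_KWH   = 3.5         # annual charging energy quoted above

    def annual_kwh(kwh_per_gb):
        """Yearly electricity attributed to one smartphone, in kWh."""
        return GB_PER_YEAR * kwh_per_gb + CONNECTION_KWH + CHARGING_KWH

    print(annual_kwh(19.0))  # Mills' figure:   ~387 kWh, more than a 322 kWh Energy Star fridge
    print(annual_kwh(2.0))   # critics' figure:  ~65 kWh, far less than a refrigerator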

  42. Tomi Engdahl says:

    Andrew’s DAS technology keeping football fans connected
    http://www.cablinginstall.com/articles/2013/08/football-stadium-das.html

    In a blog post, CommScope’s director of business development for distributed coverage and capacity solutions in North America, Patrick Lau, reported that the company has deployed its distributed antenna system (DAS) technology “in about 30 U.S. football stadiums over the last year or so.”

    “Preparing for the massive spike in mobile data usage when tens of thousands of fans gather in one location is … extremely important.” He further explains that a DAS “increases capacity by sectorizing the stadium and offloading capacity from the macro network. A properly deployed DAS solution enables fans to continue posting photos to social media outlets, texting and calling friends, and utilizing apps as part of their overall experience.”

    As we reported earlier this year in an article titled “In DAS deployments business issues are critical,” defining a user’s needs and requirements is also critical. “It is essential to clearly understand the defined areas of coverage,” we reported then, and is still true today.

  43. Tomi Engdahl says:

    802.11ac Wi-Fi access points seen gaining traction
    http://www.cablinginstall.com/articles/2013/08/wifi-11ac-gaining-traction.html

    ABI Research reports that worldwide consumer Wi-Fi customer premises equipment (CPE) shipments surpassed 43.3 million at the end of Q1 of 2013, a 16.8% increase from Q4 of 2012. According to the firm’s Wi-Fi Equipment market study, as for devices with the very latest Wi-Fi standard, 802.11ac, which started to enter the market in late 2012, a total of 0.2 million consumer 802.11ac Wi-Fi access points (APs) shipped in Q1 of 2013.

    “802.11n device shipments still dominate the market, accounting for more than two thirds of total device shipments,” comments Jake Saunders, VP and practice director of forecasting at ABI. “However, 802.11ac access point (AP) adoption is starting to gain traction.”

    “The 802.11ac protocol enables speeds up to 1.3 Gbps as well as better coverage than 802.11n,” adds Khin Sandi Lynn, industry analyst for ABI. “ABI Research expects that 1 million of 802.11ac consumer access points will be shipped by the end of 2013.”

    Additionally, according to ABI, the very first 802.11ad (WiGig) capable products are likely to enter market around the end of this year. The new 802.11ad Wi-Fi standard uses the 60 GHz band, delivers speeds up to 7 Gbps, and was approved by IEEE in early 2013.

    In the SOHO/Consumer Wi-Fi equipment market, TP-Link maintains the dominant market share with 15%, followed by Netgear and D-Link, with 12% and 11% respectively. In the enterprise WLAN sector, ZTE has the largest market share, accounting for 39% of total access point shipments, while Cisco holds the second largest market share of 26%, and HP networking ranks third as it overtook the market share of Aruba Networks as of late 2012.

  44. Tomi Engdahl says:

    Intel Talks about Multimode LTE Modems – XMM7160 and Beyond
    by Brian Klug on August 20, 2013 8:35 PM EST
    http://www.anandtech.com/show/7234/intel-talks-about-multimode-lte-modems-xmm7160-and-beyond

    Since acquiring Infineon’s wireless division and forming the Mobile Communications Group, Intel has been relatively quiet about its modem portfolio and roadmap. Pre-acquisition parts from Infineon have continued to see broad adoption in the 2G (Nokia Asha phones) and 3G market, like XMM6260 and XMM6360, which was in the international version of the Galaxy S 4. However, Intel has been relatively silent about its multimode LTE offering, XMM7160, since talking about it at MWC.

    The timing of the event was interesting since the Galaxy Tab 3 10.1 will be the first tier–1 product to launch with Intel’s XMM7160 LTE modem inside, although Intel has been quick to point out that XMM7160 was used in a single mode LTE manner in another prior device.

    So first up is Intel’s XMM7160 multimode 2G (GSM/EDGE), 3G (HSPA+) and 4G (LTE) modem, which is shipping this August in the Galaxy Tab 3 10.1 and apparently a few other devices. I don’t expect this to show up in many phones, but obviously tablets and other devices built on Intel’s SoC platforms are obvious places. This is Intel’s first multimode LTE modem, and is a UE Category 3 part at launch (100 Mbps downstream) but will receive an upgrade to Category 4 (150 Mbps) via a firmware update in the December timeframe.

  45. Tomi Engdahl says:

    Quick-start guide to Fiber-to-the-Antenna (FTTA)
    http://www.cablinginstall.com/articles/2013/08/jdsu-ftta-guide.html

    JDSU recently published its detailed PDF resource: A Quick Start Guide to Fiber-To-The-Antenna (FTTA). The comprehensive-seeming guide concerns itself with installation and maintenance testing in certifying next-generation FTTA cabling and components.

    “Staggering increases in bandwidth demand are forcing network operators to new models of mobile infrastructure like fiber-to-the-antenna (FTTA) to improve user experience and reduce costs,” states the guide’s introduction.

    A Quick Start Guide to Fiber-To-The-Antenna (FTTA)
    Installation and Maintenance Testing
    http://www.jdsu.com/ProductLiterature/Quick-Start-Guide-FTTA.pdf

  46. Tomi Engdahl says:

    What You Need to Know on New Details of NSA Spying
    http://online.wsj.com/article/SB10001424127887324108204579025222244858490.html

    Today’s report in The Wall Street Journal reveals that the National Security Agency’s spying tools extend deep into the domestic U.S. telecommunications infrastructure, giving the agency a surveillance structure with the ability to cover the majority of Internet traffic in the country, according to current and former U.S. officials and other people familiar with the system.

    The information here is based on interviews with current and former intelligence and government officials, as well as people familiar with the companies’ systems.

    Although the system is focused on collecting foreign communications, it includes content of Americans’ emails and other electronic communications, as well as “metadata,” which involves information such as the “to” or “from” lines of emails, or the IP addresses people are using.

    At key points along the U.S. Internet infrastructure, the NSA has worked with telecommunications providers to install equipment that copies, scans and filters large amounts of the traffic that passes through.

    This system had its genesis before the attacks of Sept. 11, 2001, and has expanded since then.

  47. Tomi Engdahl says:

    Why Web TV Skeptic Mark Cuban Thinks Google Can Make the NFL Work on the Web
    http://allthingsd.com/20130821/why-web-tv-skeptic-mark-cuban-thinks-google-can-make-the-nfl-work-on-the-web/

    If Google ends up getting the rights to stream NFL games over the Web, could the Web handle it?

    That is: Is America’s Internet infrastructure capable of letting millions of people watch the same football games, at the same time, while delivering a TV-quality picture?

    We’ve seen hints that the Web is now up to the challenge, but for now we don’t really have an answer. We won’t know until someone tries.

    Cuban, as you may recall, got into Web streaming way back in Web 1.0, and became a billionaire after he sold his Broadcast.com to Yahoo.

    Fast-forward to today, and Cuban is pouring a lot of resources into conventional TV, via his HDNet/AXS TV venture. He has also been a frequent skeptic about the limits of YouTube specifically and Internet video in general.

    Surprise! Cuban thinks the Web, and Google, are capable of delivering NFL games to your TV.

  48. Tomi Engdahl says:

    Yahoo #1 Web Property Again In US, First Time Since Early 2008 [Updated with comScore Statement]
    http://marketingland.com/yahoo-1-again-not-there-since-early-08-56585

  49. Tomi Engdahl says:

    SMS to Shell: Fuzzing USB Internet Modems : Analysis of Autoupdate features
    http://www.garage4hackers.com/blogs/8/sms-shell-fuzzing-usb-internet-modems-analysis-autoupdate-features-1083/

    This is a continuation of my previous (main) blog post http://www.garage4hackers.com/blogs/…t-modems-1082/ where I explained the security issues with USB internet modems. In this second part I dissect the autoupdate feature of these devices, mainly because we noticed that customers were never getting security updates.

    Anyway, Huawei was very keen on finding more bugs and fixing their products, so many thanks to them. I could not find a security response service for disgisol.

  50. Tomi Engdahl says:

    SMS to Shell: Fuzzing USB Internet Modems
    http://www.garage4hackers.com/blogs/8/sms-shell-fuzzing-usb-internet-modems-1082/

    Research focused on widely used products and services is of high importance because of the large attack and impact surface it offers an attacker. This blog focuses on an innovative new attack surface [USB data modems] because of the large impact surface.

    We will not be releasing the POC exploits we have found on various modem devices for another 3 months, mainly because there is no autoupdate mechanism available on these modems. Even though I was not able to make a highly sophisticated exploit, I have come up with POC code to demonstrate the damage, and a highly skilled exploit writer could make all the devices out there vulnerable to these attacks. So once this blog is published, I will request all the device vendors to add an auto-update mechanism on these devices and push the patches to their customers.

