Sunday, July 31, 2011

Ofcom: ADSL Actual Download Speed is Only a Third of Advertised Speed

 
The "up-to" term used by ISPs is proven, once again, to be different from the actual speed provided to subscribers, even on DSL networks (contrary to DSL carriers' usual claims when comparing themselves to shared-media access services, such as cable or wireless). 

Ofcom [the UK regulator] published new statistics on UK broadband speeds. "UK consumers are benefiting from a boost to broadband speeds .. The average UK broadband speed increased by 10 per cent in six months – from 6.2Mbit/s in November/December 2010, to 6.8Mbit/s in May 2011 .. But the gap between actual speeds and advertised (‘up to’) speeds has also increased .. The average advertised speed in May 2011 was 15Mbit/s, 8.2Mbit/s higher than average actual speeds of 6.8Mbit/s".

"Today’s research found that superfast services offer significantly faster speeds than copper ADSL broadband, with much smaller differences – or no difference – between headline speed claims and actual speeds .. However, over 75 per cent of UK residential broadband connections are currently delivered by copper ADSL telephone lines.  The research found that the average download speed received for ADSL ‘up to’ 20Mbit/s and 24Mbit/s ADSL services was 6.6Mbit/s, and more than a third of customers (37 per cent)on these packages received average speeds of 4Mbit/s or less"


See "Consumers benefit from UK broadband speed surge" - here, full document - here and previous report - here.
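For readers wondering where the "only a third" in the headline comes from, here is a quick sanity check using nothing but the figures Ofcom quotes above (a minimal Python snippet, just for illustration):

```python
# Ratio of measured to advertised speed, using only the figures quoted above.
packages = {
    "All packages (May 2011)":           {"advertised": 15.0, "actual": 6.8},
    "ADSL 'up to' 20/24Mbit/s packages": {"advertised": 20.0, "actual": 6.6},
}

for name, p in packages.items():
    ratio = p["actual"] / p["advertised"]
    print(f"{name}: {p['actual']}/{p['advertised']} Mbit/s = {ratio:.0%} of the advertised rate")
```

This prints roughly 45% for the overall average and 33% for the 'up to' 20Mbit/s ADSL packages - the latter, measured against the 20Mbit/s headline rate, is the "third" in the title.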

[Update 44: ALU - Flash Networks] PCRF - DPI Compatibility Matrix

    
This time it comes "with a twist", as the PCEF device, from Flash Networks, is not focused on traditional DPI/traffic shaping, but mainly on video optimization (see "Flash Networks Shows Bandwidth and Energy Consumption Savings" - here) and content filtering.

Flash Networks announced that ".. it has successfully completed interoperability testing of its Harmony Mobile Internet Services Gateway with the Alcatel-Lucent 5780 Dynamic Services Controller (DSC), [covered here] a 3GPP policy charging and rules function (PCRF) platform"

As Alcatel-Lucent has a number of video optimization products and partnerships (here, here and here), I guess this is a result of a specific project - a joint customer asking for integration of the two technologies.

"By enforcing policies from the Alcatel-Lucent 5780 DSC, Flash Networks’ Harmony Gateway enables:
  • Mobile data and video optimization based on subscriber profiles, service plans, and cell congestion
  • Policy-based and configurable content control for safe network browsing, according to user profile and time of day
  • End-to-end quality of experience management, actionable analytics, and monetization opportunities"
See "Flash Networks Announces Interoperability with the Alcatel-Lucent 5780 Dynamic Services Controller" - here.

Saturday, July 30, 2011

NTT: 30M Fixed and 130M Wireless Broadband Subscribers

 
An interesting slide from NTT's July 2011 report (here) shows the number of subscribers they have, by access method - DSL, FTTH, cable, mobile, Wi-Fi and WiMAX - all enjoying very high access speeds:

AT&T to Throttle Top 5% of Unlimited Subscribers

 
While AT&T is quickly moving to usage-based billing (see "AT&T: Usage-Based Pricing is a Success" - here), has "More than 15 million subscribers on tiered data plans" (here) and posted a nice ARPU increase in Q2 (see chart), it still sees the need to implement a new throttling policy.

The carrier announced it will do so starting October 1, for "a very small minority of smartphone customers who are on unlimited plans - those whose extraordinary level of data usage puts them in the top 5 percent of our heaviest data users in a billing period".
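AT&T does not say how the top 5 percent is determined; mechanically, a "top 5 percent of heaviest users in a billing period" cut-off is simply a monthly percentile over per-subscriber usage (which is also why, as the announcement below notes, the threshold varies from month to month). A minimal sketch, with entirely made-up usage figures:

```python
import random

def top_percent_threshold(usage_gb, percent=5.0):
    """Usage level above which a subscriber falls into the top `percent` for the month."""
    ranked = sorted(usage_gb)
    return ranked[int(len(ranked) * (1 - percent / 100.0))]

# Hypothetical monthly usage (GB) of unlimited-plan smartphone subscribers:
# most users are light, with a long heavy tail.
random.seed(1)
usage = [random.expovariate(1 / 1.5) for _ in range(100_000)]

threshold = top_percent_threshold(usage)
heavy = sum(1 for u in usage if u > threshold)
print(f"this month's top-5% cut-off: {threshold:.1f} GB; {heavy} subscribers would see reduced speeds")
```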

Currently AT&T's mobile broadband service plan, DataConnect, "is not an unlimited plan" and charges for overage ($10/GB or more, see chart below for 3G/4G plans - other plans exist for tablets).

"Starting October 1, smartphone customers with unlimited data plans may experience reduced speeds once their usage in a billing cycle reaches the level that puts them among the top 5 percent of heaviest data users. These customers can still use unlimited data and their speeds will be restored with the start of the next billing cycle. Before you are affected, we will provide multiple notices, including a grace period .. The amount of data usage of our top 5 percent of heaviest users varies from month to month, based on the usage of others and the ever-increasing demand for mobile broadband services .. Using Wi-Fi doesn't create wireless network congestion or count toward your wireless data usage".

"But even as we pursue this additional measure, it will not solve our spectrum shortage and network capacity issues. Nothing short of completing the T-Mobile merger will provide additional spectrum capacity to address these near term challenges"


See "An Update for Our Smartphone Customers With Unlimited Data Plans" - here.

See also "Sprint Will Throttle Virgin Mobile Users Exceeding 2.5GB" (also in October) - here and "Verizon's UBB Starts July 7"- here.


Friday, July 29, 2011

Infonetics: Optimization and QoE Drive CDN; Migration to IPv6 is Happening

 
New research from Infonetics Research finds that "The percentage of service providers deploying content delivery networks (CDNs) is growing from 38% this year to 50% by 2013".

The research, by Michael Howard (pictured), co-founder and principal analyst, provides the main drivers for CDN use (see chart), and notes that "Though the industry has been talking about IPv6 for over a decade, it’s finally enjoying a quiet evolution, with 83% of the service providers we interviewed already deploying IPv6 or planning to by next year, and all have plans to migrate".

See "Majority of carriers plan IPv6 in 2011; caching, CDN critical to reduce traffic loads" - here.


Google Optimizes Websites and Brings CDN Service to the Masses

   
Google announced (in a blog post) a new service, dubbed "Page Speed Service" that "..  fetches content from your servers, rewrites your pages by applying web performance best practices, and serves them to end users via Google's servers across the globe .. In our testing we have seen speed improvements of 25% to 60% on several sites. But we know you care most about the numbers for your site .. At this time, Page Speed Service is being offered to a limited set of webmasters free of charge. Pricing will be competitive and details will be made available later".

Rewriting (page optimization) includes: Combine CSS, Combine JavaScript, Optimize Images, Resize Images, Move CSS to Head and Proxy Images.
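Google has not published how the rewriter works internally; just to illustrate the kind of transformation involved, here is a toy Python sketch of two of the listed rewrites - "Combine CSS" and "Move CSS to Head" - applied to a hypothetical page (a real rewriter would use a proper HTML parser, not a regex):

```python
import re

def combine_css_and_move_to_head(html, combined_href="/combined.css"):
    """Toy rewrite: drop individual stylesheet links and put one combined link early in <head>."""
    link_re = re.compile(r'<link rel="stylesheet"[^>]*>\s*', re.IGNORECASE)
    if not link_re.search(html):
        return html
    html = link_re.sub("", html)                                   # remove the scattered <link> tags
    combined = f'<link rel="stylesheet" href="{combined_href}">'   # one request instead of many
    return html.replace("<head>", "<head>" + combined, 1)          # served early, before the body renders

page = """<html><head><title>demo</title></head>
<body><link rel="stylesheet" href="a.css"><p>hi</p><link rel="stylesheet" href="b.css"></body></html>"""
print(combine_css_and_move_to_head(page))
```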
 
When I tested my small, Google-hosted web site (http://www.ronen-cs.com/), the results were not that impressive..

See "Page Speed Service - Web Performance, Delivered" - here and product page - here.

Thursday, July 28, 2011

Sandvine's CTO Discusses Recent Wins

   
Earlier this week, Sandvine announced 5 new customers (see "Sandvine Wins Five New Customers" - here)  - "three wireless operators and two DSL access providers .. The customers are located in the United States, the United Kingdom, South Africa, Central America and the Caribbean".

One of the new customers is said to be using Sandvine's solution for "VoIP Quality of Experience, to ensure subscribers’ satisfaction by monitoring the quality of experience of their VoIP calls".

In addition, "Three of the new customers purchased Sandvine’s new Policy Traffic Switch 22000 (PTS 22000) platform, which was announced earlier this year (here)"

In order to learn more about these issues, I spoke with Don Bowman (pictured), Sandvine's CTO.

  • According to Don, the "VoIP customer" (from South Africa) is using Sandvine to measure the quality (latency, jitter, etc.) of the different VoIP services on its network - its own and competing OTT services. While there are customers that also use Sandvine's gear to control the QoS of VoIP services, in this case the customer is only measuring the quality (a minimal jitter-estimation sketch appears after this list).
      
  • The success of the recently announced PTS 22000 is due to its form factor and price/performance numbers. Operators are looking to save space and power and therefore see this model as particularly fitting their needs.  
    PTS 22000
    
  • Until recently, the common location for DPI devices in wireless networks was on the Gi interface, between the GGSN and the Internet. The PTS 22000 (and other models as well) may be placed on other interfaces, providing higher granularity in managing wireless traffic - due to the support of stacked tunnels, such as MPLS over GTP. According to Don, in 3G networks most customers will place Sandvine on the Gn or Gp interfaces. On LTE, most will place it on the S5, S1 or S8 interfaces (see diagram below).
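On the VoIP measurement point above: the standard way passive probes estimate call quality is the RFC 3550 interarrival-jitter formula over packet timestamps. The sketch below is a minimal illustration of that formula, not Sandvine's implementation:

```python
def interarrival_jitter(send_times, recv_times):
    """RFC 3550 smoothed interarrival jitter estimate, in the same time units as the inputs."""
    jitter, prev_transit = 0.0, None
    for s, r in zip(send_times, recv_times):
        transit = r - s                      # one-way transit; a constant clock offset cancels out below
        if prev_transit is not None:
            d = abs(transit - prev_transit)  # |D(i-1, i)|
            jitter += (d - jitter) / 16.0    # J(i) = J(i-1) + (|D| - J(i-1)) / 16
        prev_transit = transit
    return jitter

# Hypothetical 20ms-spaced VoIP packets with some delay variation (all times in seconds).
send = [i * 0.020 for i in range(6)]
recv = [s + d for s, d in zip(send, [0.050, 0.052, 0.049, 0.060, 0.051, 0.050])]
print(f"estimated jitter: {interarrival_jitter(send, recv) * 1000:.2f} ms")
```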


Wednesday, July 27, 2011

Allot: 32% of MNOs Employ Application-Aware Charging Models

Allot's MobileTrends Report presents (in addition to wireless traffic statistics) the results of "a survey of more than fifty mobile networks around the world. The information gathered is publicly available on operators’ websites" regarding wireless charging models used by MNOs.

Main findings are:
  • 32% of mobile operators employ application-aware charging models (see Telefonica/Movistar, TeliaSonera, MetroPCS)
  • 89% of mobile operators employ volume charging models (too many examples ..)
  • 51% of mobile operators sampled do not offer ‘unlimited’ or ‘flat rate’ pricing plans
See "Allot MobileTrends Report Shows Significant 77% Growth in Mobile Data Bandwidth Usage in H1, 2011" - here. The report is available here (registration required).

See also "Recent Trends in Policy Control" - here.

Tesco [UK] Fair Use Policy - Pay More or be Terminated!

  
Tesco Broadband [UK] has a common traffic management policy for its fixed broadband service: "Tesco constantly monitors the way in which our customers use our broadband services. In particular, at peak times, we will look for (and restrict) non-time-critical traffic, such as Bit Torrent, other peer to peer file sharing applications and on-line storage services .. during the peak hours .. we may slow down specific services to make the shared usage of our network fair for all customers .. This policy ensures that we can deliver a great service to all our customers at all times and we never have to limit the customers who are using the internet for day to day time-critical transactions, such as normal surfing, e-mailing, on-line shopping and banking, using BBC/Sky iPlayer applications, gaming or making on-line phone calls via companies like Skype".

However, Tesco's fair use policy seems to be unique (here):

"We regularly monitor and review our customer’s collective and average monthly usage to set our fair usage limit (FUL) at a level that will not affect the majority (at least 95%) of our customers. Currently the FUL is set at 100GB per month.  If a customer regularly downloads in excess of the FUL, we take the following steps:
 

1. When we first notice that a customer has exceeded the FUL, we contact the customer to bring the matter to their attention. We will ask the customer to modify their use and/or give them the opportunity to move onto a ‘Super-user tariff’ (see our Price List for details). [I can’t find it there or elsewhere]

2. If the customer declines to move to the Super-user Tariff but continues to exceed the FUL for a further two consecutive months, we will suspend or terminate the customer’s service."
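As I read the quoted policy, the enforcement amounts to a simple month-by-month state machine: notify on the first breach, offer the Super-user tariff, and suspend or terminate after two further consecutive months over the limit. A sketch of that reading (the 100GB FUL is Tesco's figure; everything else is illustrative):

```python
FUL_GB = 100  # Tesco's stated fair usage limit

def apply_fair_use_policy(monthly_usage_gb, accepted_super_user_tariff=False):
    """Walk a subscriber's month-by-month usage through the steps quoted above (my reading of them)."""
    notified, consecutive_over = False, 0
    for month, usage in enumerate(monthly_usage_gb, start=1):
        if usage <= FUL_GB:
            consecutive_over = 0
            continue
        if not notified:
            notified = True                  # step 1: contact the customer, offer the Super-user tariff
            print(f"month {month}: over FUL ({usage} GB) - notify customer, offer Super-user tariff")
        elif accepted_super_user_tariff:
            print(f"month {month}: over FUL but on Super-user tariff - no action")
        else:
            consecutive_over += 1            # step 2: two further consecutive months over the limit
            print(f"month {month}: over FUL again ({consecutive_over} consecutive)")
            if consecutive_over >= 2:
                return f"month {month}: service suspended or terminated"
    return "no termination"

print(apply_fair_use_policy([80, 120, 150, 160]))
```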
 

Tuesday, July 26, 2011

Allot Reports 93% Growth in Mobile Video Streaming; YouTube Leads

 
Allot published its semi-annual MobileTrends Report, showing "that mobile data bandwidth usage continued its steady rise with 77% growth during the first half (H1) of 2011 .. video streaming continued to show significant growth with a 93% increase, and remains the single largest application taking up bandwidth, accounting for 39% of mobile bandwidth .. YouTube remains the single most popular mobile Internet destination, accounting for 22% of mobile data bandwidth usage and 52% of total video streaming".

See "Allot MobileTrends Report Shows Significant 77% Growth in Mobile Data Bandwidth Usage in H1, 2011" - here. The report is available here (registration required).

Allot does not mention Netflix in the report, probably since Netflix is mainly used over fixed connections. Nevertheless, according to Sandvine, Netflix is the leading video application in North America - here - and it is now expanding to Latin America and some countries in Europe.

[Update 43: Volubill - Cisco Joint Projects] PCRF - DPI Compatibility Matrix

 
The PCRF-DPI matrix has been updated with information received from Volubill, indicating its compatibility with Cisco's DPI equipment.

According to Volubill, the integrated solution was already implemented by 3 joint Cisco-Volubill customers: iBasis-USA (Cisco 7604), Globacom Ghana (Cisco CSG) and Network Norway (Cisco SCE; see also here).

I hear that another one will join the list soon.

Monday, July 25, 2011

[Calcalist]: Allot to Raise $70M; F5 Acquisition was Scrapped

  
Golan Hazani reports to Calcalist that "Allot Communications [NASDAQ:ALLT], which is traded on NASDAQ at a $428 million market cap, will hold a second financing round to raise $70 million. Calcalist has learned that the forthcoming offering will be held in upcoming weeks and that Tamir Fishman venture capital fund is expected to sell off a considerable portion of its holdings in the offering .. It is unclear whether additional shareholders will partake in the sale offer, however the offering comes after the negotiations for the acquisition of the company by F5 [here] foundered and the acquisition was scrapped .. In recent weeks, Allot has been enjoying a record market cap after its share soared to $18 following rumors of the acquisition. Negotiations with F5 continued for several months, but were brought to an end by the American communications equipment vendor".

See "Allot on its way to NASDAQ $70M offering" - here (English!!!).

Allot is heavily traded and down 5% at midday Monday.

Research: YouTube Flow Control Causes Packet Loss and Retransmission

  
Research by Shane Alcock and Richard Nelson (pictured) from the University of Waikato, Hamilton, New Zealand, analyzes YouTube's application flow control.

See "Application Flow Control in YouTube Video Streams" - here.

Abstract: This paper presents the results of an investigation into the application flow control technique utilised by YouTube. We reveal and describe the basic properties of YouTube application flow control, which we term block sending, and show that it is widely used by YouTube servers. We also examine how the block sending algorithm interacts with the flow control provided by TCP and reveal that the block sending approach was responsible for over 40% of packet loss events in YouTube flows in a residential DSL dataset and the re-transmission of over 1% of all YouTube data sent after the application flow control began.

We conclude by suggesting that changing YouTube block sending to be less bursty would improve the performance and reduce the bandwidth usage of YouTube video streams.

The authors say that "We have presented these results to engineers at YouTube and their parent company, Google. They have acknowledged that this is a legitimate problem and are currently working on modifying the block sending algorithm to be less bursty. We believe that this will offer improved YouTube performance for users and reduce YouTube’s bandwidth requirements. The largest improvements will be seen by YouTube clients using congested connections, but well-connected clients should also see some benefit".
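The paper's core point - that the burstiness itself triggers loss at the bottleneck - can be illustrated with a very crude queue simulation: the same average rate, sent either smoothly or in large blocks, into a hypothetical fixed-rate buffer. The numbers are made up and TCP's reaction is not modeled; only the qualitative effect matters:

```python
def drops_at_bottleneck(arrivals_per_tick, service_per_tick=10, buffer_limit=50):
    """Count packets dropped at a fixed-rate bottleneck with a finite buffer (toy model)."""
    queue, dropped = 0, 0
    for arriving in arrivals_per_tick:
        queue += arriving
        if queue > buffer_limit:
            dropped += queue - buffer_limit   # tail-drop whatever does not fit
            queue = buffer_limit
        queue = max(0, queue - service_per_tick)
    return dropped

ticks, avg_rate = 100, 8                                # 8 packets/tick, below the 10/tick service rate
paced  = [avg_rate] * ticks                             # smooth sender
bursty = ([avg_rate * 10] + [0] * 9) * (ticks // 10)    # same average, delivered in 10x blocks

print("paced sender drops: ", drops_at_bottleneck(paced))
print("bursty sender drops:", drops_at_bottleneck(bursty))
```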

[Guest Post]: Can Data Optimization Find its Way to Backbone Networks?

By Dr. Yair Shapira*, VP Marketing & Business Development, DiViNetworks

Bandwidth optimization by trading bandwidth for storage or processing power has long been debated, and proven beneficial in many scenarios where links are expensive. With the continuously growing hunger for bandwidth, the scalability of tier-1 backbones is becoming questionable. At over 40% YoY traffic growth, data optimization is slowly but surely finding its way from sporadic expensive links to mainstream backbones.

Apart from the financial question – proving that optimization is indeed less expensive than merely expanding bandwidth – introducing optimization into a network has its own challenges. After all, backbone and access networks are primarily designed to transfer data, not to modify data, nor to serve content, as many optimization technologies suggest.

This article briefly explores the main factors to take into consideration when seeking optimization solutions for backbone networks.

Supportability in times of Internet revolutions

Some optimization solutions suggest distributing their equipment in network nodes, achieving optimization by practically mimicking the original content server within the network, including its content, application and business logic. Think of dozens or even hundreds of nodes, distributed in critical network junctions all over the territory, practically manipulating protocols and content. Can such a system be stable? Will it keep up with the ever-changing Internet?

As opposed to service-core systems, backbone-based optimization systems should refrain from being application-, content- and protocol-aware. Otherwise, continuous pampering will be required and ongoing changes and maintenance of the optimization systems will be inevitable.

Sustainable performance over time

Even the most enthusiastic optimization supporters admit that optimization factors tend to erode with time. Most techniques, such as video and P2P caching, are way too sensitive to various Internet phenomena. The quickly evolving nature of Internet traffic introduces growing uncertainty in performance factors through time.

When calculating ROI for optimization solutions, make sure that the technology is really future-proof, and does not evaporate with every change in the Internet. A lasting optimization method must not be dependent on application, content or format, and should not be based on fragile trends.
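As a back-of-the-envelope illustration of this point, compare the cumulative bandwidth cost avoided by an optimizer whose savings factor is stable with one whose factor erodes each year. All figures below are hypothetical placeholders, except the roughly 40% YoY traffic growth cited earlier:

```python
def optimization_value(traffic_gbps, cost_per_gbps_month, savings_factor, yearly_erosion, years):
    """Cumulative bandwidth cost avoided by an optimizer whose savings factor erodes over time."""
    total_saved = 0.0
    for year in range(years):
        effective = savings_factor * ((1 - yearly_erosion) ** year)
        total_saved += traffic_gbps * effective * cost_per_gbps_month * 12
        traffic_gbps *= 1.4                  # ~40% YoY traffic growth, as noted above
    return total_saved

# Hypothetical inputs: 100Gbps of backbone traffic, $1,000 per Gbps per month, 30% initial savings.
stable  = optimization_value(100, 1_000, 0.30, yearly_erosion=0.0,  years=5)
eroding = optimization_value(100, 1_000, 0.30, yearly_erosion=0.25, years=5)
print(f"5-year savings, stable factor:  ${stable:,.0f}")
print(f"5-year savings, eroding factor: ${eroding:,.0f}")
```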

Seamless integration with the traffic flow

One thing network operators strive to avoid is modifying their data flow for the sake of optimization. Limiting future changes is also something that operators are not keen on. Yet most optimization systems are ALGs – Application Level Gateways. As such they tamper with layers of communication which are not supposed to be interrupted within the network. Although often referred to as “transparent proxy”, their mere existence as an ALG limits the flexibility in traffic planning – asymmetric routing, link load balancing, tunneling etc.

Operators should thus strive to adopt optimization solutions, which operate at a network level, rather than at an application level.

Maintenance of core-network functionality

With the growing competition and the ongoing decline in ARPU, operators are heavily investing in smart functionality in the core network – traffic management, smart ad insertion, advanced charging, service selection, video optimization, protocol acceleration and more.

When applying certain optimization technologies, especially caching, down the network path, these functionalities are lost or compromised. A cascade of mechanisms and interfaces has to be constructed in order to compensate for the traffic not actually passing in the core. The result – heavy investments and revenue-generation techniques are nullified.

Bandwidth optimization mechanisms must, therefore, be designed to maintain the core-network functions – not by applying compensation mechanisms, which introduce complexity and require endless updates, but merely by leaving the traffic flowing through the core as is.

Co-existence with content providers

We are witnessing accelerating tension and clashes between network operators and content providers. The operators claim content providers monetize on the former’s assets, whereas the content providers claim control over their content is hijacked by the operators. The operators, trying to minimize the load caused by OTT (Over the Top) traffic, seek optimization techniques, to the extent of serving the content locally using caching and telco-CDNs.

Yet, by locally serving content or manipulating content, the network operators interfere with the content providers’ business models – managing speeds, inserting ads, limiting session times and applying other business logic. Legal copyright aspects, and non-conformity with standards directives, are also brought up in this tug of war.

Optimization must therefore not jeopardize the already-fragile co-existence between network operators and content providers. Selected optimization methods must provide solid optimization factors for the operator on one hand, but maintain the content provider’s control over the content on the other.

And again, not by building a house of cards of interfaces to the different content providers and compensation mechanisms – content validity check, fake metering, speed sampling. Network optimization is not an antivirus – it should not be updated with every new web site, video format, or business logic in the Internet.

IP traffic coverage

One of the main considerations for operators, when choosing an optimization solution, is how much of the operator’s traffic will eventually be addressed by the chosen solution. Various techniques can demonstrate excellent savings for the traffic they handle, yet the portion of this traffic is low. All ALGs operate on specific portions of the IP traffic, and therefore apply to merely a part of the bandwidth.

Content providers, struggling to avoid caching and video compression, develop mechanisms to make life tough for OTT optimization solutions. Thus much of the Internet content is not handled by many optimization solutions. A recent study by a tier-1 provider showed that although caching can demonstrate over 30% hit-rate in theory, in actual traffic it provides merely 4% savings due to technical and legal considerations.
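The gap between the theoretical 30% hit rate and the realized 4% savings is what remains once the hit rate is multiplied by the share of total traffic the cache can actually (technically and legally) address. A one-line decomposition, where the addressable share is an assumed figure chosen only to match the study's numbers:

```python
# Effective savings = (share of traffic the solution can address) x (hit rate on that traffic).
hit_rate = 0.30      # the "over 30% hit-rate in theory" cited above
addressable = 0.13   # assumed: only ~13% of bytes remain cacheable after technical/legal exclusions

print(f"effective savings on total traffic: {addressable * hit_rate:.0%}")  # ~4%, the study's figure
```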

IP backbone networks handle the overall IP traffic, and refrain from fragmenting traffic according to its content or application. An optimization solution, to be deployed within the backbone network, must also provide a safety net for 100% of the IP traffic.

Scaling up

10, 40 and 100Gbps links are already a reality. Network nodes oversee an ever-increasing flow of traffic. Optimization systems, deployed within the backbone network nodes, will need to crunch similar throughputs.

Alas, most optimization mechanisms cannot scale to such bandwidth. Many that can stack up to these bandwidths require excessive computational resources and endless storage. Implementing such solutions does not make any operational sense.

Optimization solutions in the Zettabyte era must scale on par with other networking equipment. A 10Gbps line should require no more than a 1RU device. 40 and 100Gbps down the road must be handled in a compact solution, or even plugged into existing networking equipment.

There is a limit to brute-force scaling (merely throwing more ports and fiber at the problem), and we are rapidly reaching this limit. Data optimization, already a common reality on expensive long-haul lines, is becoming a must-have in tier-1 backbone networks. Smarter ways to move data around will soon proliferate.

Yet, the serious barriers must be removed before introducing data optimization into backbone networks. What works for point-to-point few-Gbps links, will simply not work for hundreds-of-nodes multi-Tbps networks.

*Dr. Yair Shapira serves as DiViNetworks’ Senior Vice President of Marketing & Business Development. DiViNetworks is a well-known provider of bandwidth optimization solutions, with dozens of commercially deployed systems within major backbone networks.

Dr. Shapira joined DiViNetworks in 2009, after serving as VP Marketing of Jungo (acquired by NDS). Prior to Jungo Dr. Shapira served as VP Business Development and CTO at Flash Networks, a leading provider of mobile optimization systems. Dr. Shapira also sat on the Board of Directors of Koor Technologies, an early-stage VC, and provided strategic and technological consulting services to various companies and VCs.

Dr. Shapira earned his B.A. in Mathematics and Physics from the Hebrew University, and earned his Ph.D. in Applied Math from the Technion.

Sunday, July 24, 2011

Blog News: Guest Posts

  
I am delighted to start a new "Guest post" section on my blog. Guest posts will consist of educational articles, by industry experts, related to the topics covered in my blog. They will provide original content on the latest market trends, technology, standardization, business aspects or similar subjects.

My first guest, Dr. Yair Shapira, will cover tomorrow the main factors to take into consideration when seeking optimization solutions for backbone networks.

If you would like to propose an article for the new section, please send me a proposed subject, an abstract and the author's details.

NYU Polytechnic Researchers Scale DPI with New Hardware

 
The Polytechnic Institute of New York University announced that ".. H. Jonathan Chao (pictured below), who heads the Department of Electrical and Computer Engineering at the Polytechnic Institute of New York University (NYU-Poly), and Industry Assistant Professor N. Sertac Artan have developed and patented a hardware solution to revolutionize this increasingly critical cybersecurity function".

See "New Hardware Solution Offers Cybersecurity Protection Well Before Malware or Virus Reaches Personal Computer" - here.

See more - "HIGH-SPEED NETWORK INTRUSION DETECTION AND PREVENTION" - here.

"Effective DPI examines every packet entering a router switch. Its contents are compared against an ever-growing catalogue of known viruses or attack signatures. With millions of packets arriving each second, the process is often accomplished by a network of processors running parallel searches on portions of data packets — an approach that doesn't scale well to high-speed traffic. Chao and Artan devised a scheme for consolidating the inspection process to a single node, compressing the catalogue of attack signatures to fit on one chip. This allows service providers to streamline their DPI strategy, using fewer resources without compromising efficacy or speed. With a prototype already developed, Chao and Artan are testing their solution with the goal of licensing the technology"

DPI Announcements: NetLogic and EZchip Collaborate to Achieve IPv6 100G Performance

 
Two of the leading vendors of network processors, EZchip (here) and NetLogic Microsystems announced that ".. the companies are collaborating to deliver the industry’s highest performance, merchant, packet-processing solutions for IPv6-ready Terabit class systems.  By optimizing and implementing exclusive operational modes in both EZchip’s NP-4 100Gbps network processor (NPU) and NetLogic Microsystems’ NL11k knowledge-based processor, the companies are enabling customers to achieve enhanced performance and functionality when using both the processors together when compared to alternative solutions .. the growing requirement for deep-packet inspection throughout the network is driving an unprecedented need for knowledge-based processors with significantly higher performance and database capacity .. The companies have achieved broad design success across leading Tier One OEMs adopting the 100G NP-4 NPU and the industry-leading NL11k knowledge-based processor .. This powerful 100G combined solution from NetLogic Microsystems and EZchip is available immediately".

See "NetLogic Microsystems and EZchip Collaborate to Deliver High-Performance Packet Processing Solutions for Terabit Class Systems" - here.

Saturday, July 23, 2011

[Analysys Mason]: SDP Market Slow Due to Delays to Real-time Charging Projects

   
A new report by Peter Mottishaw (pictured), Principal Analyst, Analysys Mason finds that "The service delivery platform (SDP) market generated USD3.63 billion in revenue in 2010, up 7% from USD3.38 billion in 2009 [here]. Growth was lower than expected because of delays to real-time charging projects in some emerging markets and to investments in mobile device management, caused by the slow growth in related services for communications service providers (CSPs)".

See "Service delivery platforms: worldwide market shares 2010" - here.

See also:
  • Infonetics: SDP Spending will Reach $5.2B in 2015; Oracle Leads - here 
  • [Current Analysis] SDP Market Portfolio Assessment: Ericsson Leads, Oracle Missing - here

Friday, July 22, 2011

Vodafone: "Traffic management limiting data volume growth to +31%"

  
Vodafone released Q2 results (see "Interim Management Statement for the Quarter ended 30 June 2011" - here) showing growth in data revenues of 24.5%, to £1.5B. 
  • Germany: Mobile internet customer growth and smartphone sales driving data +21.4%
       
  • Italy: Data revenue +18.9%; led by mobile internet +66%
     
  • Spain: Data revenue growth slowed to +8.9%; mobile broadband price cuts offset strong smartphone sales
     
  • UK: Data revenue growth +21.9% led by smartphone sales and data attach 82%
     
  • India: Data growth remains strong, +70% led by mobile internet
     
  • Vodacom South Africa: Data revenue growth +35%; data users +37% to 9.6m

At the group level, Vodafone says (here): "Maintaining network quality: Traffic management limiting data volume growth to +31%" (Vodafone uses Allot, Tekelec and others for traffic management; See "Vodafone Uses DPI and Policy Management to Improve QoE" - here).

[IMS Research]: "telcos are actively seeking solutions to optimize bandwidth"

   
A recent IMS Research study finds that "telcos who are IPTV providers face substantial challenges adapting their networks to accommodate the onslaught of over-the-top (OTT) video. IMS Research estimates that in 2010 peak bandwidth utilization was 44 percent of capacity, and that the bandwidth usage per household is forecast to increase by more than 50 percent between 2010 and 2015 .. OTT subscription services will generate a cumulative $32 billion in revenues globally over the next five years .. bandwidth congestion challenges are more pronounced in countries with lower broadband penetration and correspondingly longer loop lines. The research firm expects Eastern European and Latin American DSL providers to struggle acutely with video-generated congestion issues".

According to John Kendall, Analyst, “What we have now is a situation where the telcos are actively seeking solutions to optimize bandwidth .. OTT is here to stay, and the telcos have accepted that .. using caching in the network [is one of the] solutions that are occurring right now, as telcos position themselves to meet the rapidly growing consumer OTT demand. Even further, many operators are looking at deploying local content delivery networks (CDN) to keep their traffic local, reducing costs of bandwidth transit”.

See "OTT Video Sends Telcos Scrambling for Bandwidth Optimization Solutions" - here.

Thursday, July 21, 2011

Flash Networks Shows Bandwidth and Energy Consumption Savings

     
Flash Networks published the "results of live tests measuring the effect of data optimization on radio access network (RAN) resources, in a recent experiment conducted on a European operator’s HSDPA network".

"Flash Networks’ Web & Media Optimization solution reduced cell power consumption by 25%, throughput by 30%, timeslot usage by 20%, and code shortage by 30%. In addition, Flash Networks’ optimization successfully freed-up 40% of bandwidth on a fully-utilized cell, enabling the operator to service more subscribers with the same radio resources while providing them with better quality of experience".

See "Flash Networks Demonstrates 30% Reduction of Radio Resources Due to Data Optimization" - here.

Flash Networks' installed base includes O2, Orange, SingTel Group, T-Mobile, Telefónica, and Wind.

Procera Q2 Highlights - Competition, F5, Genband, Hiring Plans

  
Some highlights from Procera's quarterly call Q&A session, following the release of Q2 results (revenues and bookings of $9.7M - see "Procera Networks Announces Second Quarter 2011 Results" - here):
  • Procera is going to hire 16 people during H2 - 5 in R&D, 11 in sales and marketing
     
  • The sales cycle is shrinking - now typically 9-12 months, though some larger deals took 18 months to close
     
  • Of the 20 new service provider wins, 30% were displacement of competitors
  • Main traction is in the mobile market, main competition comes from Sandvine and Allot
  • Procera had three 10% customers in Q2, together contributing 40% of revenues
  • Management is positive about the F5 relationship (see "[Calcalist] Rumors - F5 to Buy Allot for $450-500M" - here)
     
  • Genband generated $165K during Q2; some of the 15 tier-1 trials are done by Genband. Management is hoping for a better H2.

Wednesday, July 20, 2011

Telefonica/Movistar Leads CALA’s Trend of Pricing by Application

  
Daniele Tricarico (pictured) reports from Informa’s Mobile VAS CALA event in Miami about new pricing models in Latin America:

"..Pricing by app is already starting in CALA with Telefonica taking the lead. Colombia was the first market  where the Spanish group launched  the ”paquetes de internet”, a number of social media, mobile email and Internet packages that range in price according to the amount of services a subscriber wants access to. A speaker from Movistar Chile at the event confirmed that this is the trend – evolving from access per MB and per hour to access per application – and anticipated that the “paquetes” will be soon extended to the Andean country".

See "Price discrimination by app is a hot topic at Mobile VAS CALA"  - here.
  
Telefonica uses Sandvine's DPI solutions (see "Sandvine Exposes Telefonica/O2 Use Cases" - here, mentioning that "Telefonica recently introduced a menu of tiered pricing plans that accommodate subscribers' personalized network usage patterns and budgets").


"KPN has decided not to block any services or to set separate rates for different services"

 
After trying to impose a surcharge for the use of certain mobile applications (here) and getting in return the first European Net Neutrality law (here), KPN announced new mobile service plans.

"On September 5, KPN will introduce new mobile propositions for consumers in the Netherlands under its KPN and Hi brands. The propositions are based on the use of mobile data; they have a wider selection of data bundles and thereby respond to the latest trends in the Dutch telecom market .. The introduction of the new propositions means that .. Mobile data will become more expensive within the bundle .. KPN has decided not to block any services or to set separate rates for different services. The propositions comply with the forthcoming new Dutch legislation and are net neutral .. the consumer will pay more for a new smartphone because these devices are becoming increasingly expensive".

See "New Dutch mobile propositions KPN and Hi" - here.

Tuesday, July 19, 2011

Tekelec: Policy Management Business Cases (AT&T, Vodafone and Others)

  
A recent presentation by Tekelec shows the business case, with named examples (customers?), for using policy management:
  • QoS based tiers: Vodafone
     
       
  • Volume based tiers: AT&T - "If all subscribers moved to AT&T’s new tiered data plans, the company could have lost over $42 million per month. This was more than offset by the addition of 3.2 million new iPhone users in Q3 2010 when tiered services were introduced." (a quick break-even calculation appears after this list)
     
  • One customer, many devices: Rogers
     
  • Casual usage and loyalty programs: Claro, Telecom New Zealand, Vodafone Germany
     
  • OTT Monetization
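The AT&T numbers in the second bullet imply a simple break-even figure - how much each of the 3.2 million new iPhone users would need to contribute per month to offset the potential $42 million loss:

```python
# Break-even data ARPU implied by Tekelec's AT&T example above.
potential_monthly_loss = 42_000_000   # "over $42 million per month" if everyone moved to tiered plans
new_iphone_users = 3_200_000          # iPhone additions in Q3 2010, when the tiers were introduced

print(f"each new user needs to add ~${potential_monthly_loss / new_iphone_users:.2f}/month to offset the loss")
```

That works out to roughly $13 per new user per month.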
See "Four Use Cases for Monetizing Mobile Broadband" - here.

See also "Tekelec: Policy Management Use-cases, Deployments and Performance" - here.


Survey: Mobile Networks are Near Full Capacity

   
Traditional network management practice says that a network element's usage level should not exceed 70% of its capacity. If it does, it is time to do something: buy more capacity or manage it better. So, according to a recent Credit Suisse report, it is time to do something for wireless networks globally. For North America, where current utilization at peak times reaches 80%, it is even more urgent.

FierceWireless reports that "Wireless networks in the United States are operating at 80 percent of total capacity, the highest of any region in the world, according to a report prepared by investment bank Credit Suisse. The firm argued that wireless carriers likely will need to increase their spending on infrastructure to meet users' growing demands for mobile data .. globally, average peak network utilization rates are at 65 percent, and that peak network utilization levels will reach 70 percent within the next year .. 23 percent of base stations globally have capacity constraints, or utilization rates of more than 80 to 85 percent in busy hours, up from 20 percent last year .. In the United States, the percentage of base stations with capacity constraints is 38 percent, up from 26 percent in 2010".
   
Credit Suisse analyst Jonathan Chaplin (pictured) wrote "Wireless capex expectations may need to increase longer-term.. investors seem to expect capital intensity to start to decline in 2012 once LTE spending is largely complete. Investors may be underestimating the level of equipment spending that is required on an ongoing basis to support rapid growth in wireless data".

 

Monday, July 18, 2011

Sprint Will Throttle Virgin Mobile Users Exceeding 2.5GB

 
Will Sprint remain the only US carrier with "truly unlimited postpaid mobile data plans in place"? Maybe, but at least its prepaid brand, Virgin Mobile, won’t.

The carrier announced that "Beginning in October 2011, Virgin Mobile will also move to reduce data speeds [to 256 kbps - see below] when a customer’s data usage exceeds 2.5GB in a month but still provide unlimited 3G access without a contract, usage cap, overage or activation fees. Based on current usage patterns, fewer than 3 percent of Virgin Mobile USA customers use more than 2.5GB of data usage per month. After reaching this level, this minority of customers may experience slower page loads, file downloads and streaming media. When a customer’s next month begins, the data usage meter starts back at zero with unlimited 3G speeds .. Last week, Verizon pulled its unlimited data plan; T-Mobile and AT&T have also moved away from unlimited data usage. At this time, only Sprint, which owns Virgin Mobile USA, has truly unlimited postpaid mobile data plans in place".

See "Virgin Mobile New Beyond Talk Plans Offer Unlimited Data Plan With No Contract" - here.

David Trimble, vice president for Virgin Mobile USA, said: “We are all facing the same situation and this is the best way for Virgin Mobile to maintain the best network experience as data usage explodes .. Our no-contract and postpaid competitors like Cricket, MetroPCS and T-Mobile have either implemented more stringent constraints and/or don’t disclose slower speeds and data caps. We believe this adjustment – with no hard cap or overage charges for more usage – gives the most value to the largest group of consumers. It’s important to Sprint and the Virgin brand that we be as up-front as possible with our customers.”

Virgin Mobile's site explains [here] that "Effective in October [exact date TBD], when any Beyond Talk customers reach 2.5GB of data usage within a current monthly cycle, they could have their maximum throughput speeds limited to 3G speeds of 256 kbps or below for the remainder of that plan cycle".
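To put the 256 kbps ceiling in perspective, here is a quick calculation of how long common downloads would take at that speed (the file sizes are arbitrary examples):

```python
THROTTLED_KBPS = 256  # Virgin Mobile's stated post-2.5GB speed ceiling

def download_minutes(size_mb, kbps=THROTTLED_KBPS):
    """Time to move size_mb (decimal) megabytes at a link speed given in kilobits per second."""
    return size_mb * 8 * 1_000_000 / (kbps * 1_000) / 60

for label, size in [("5 MB song", 5), ("50 MB app update", 50), ("700 MB video", 700)]:
    print(f"{label}: ~{download_minutes(size):.0f} minutes at {THROTTLED_KBPS} kbps")
```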

Comcast Disconnects Over-Quota Customer for a Year

 
Hot Hardware tells the story of Andre Vrignaud (pictured), who exceeded Comcast's 250GB data cap on his $60/month plan, and was told "that he's cut off for the next year - he can't even switch to an uncapped, higher-priced, lower-speed business connection. Comcast says it's his fault for not monitoring his bandwidth better".

See "Comcast Shuts Down Customer That Exceeded Bandwidth Cap" - here.

Charlie Douglas, Comcast Spokesman said: "If someone's behavior is such that it degrades the quality of service for others nearby -- that's what this threshold is meant to address .. It can negatively affect other people .. There's not much we can say. We called and reiterated the policy and told him if he did exceed it again in six months, he would face suspension. That is our policy".

Comcast's FAQ page (here) answers the question - "What will happen if I exceed 250 GB of data usage in a month?":

"The vast majority - more than 99% - of our customers will not be impacted by a 250 GB monthly data usage threshold.  If you exceed more than 250 GB, you may receive a call from the Customer Security Assurance ("CSA") team to notify you of excessive use.  At that time, we will tell you exactly how much data you used.  When we call you, we try to help you identify the source of excessive use and ask you to moderate your usage, which the vast majority of our customers do voluntarily.  If you exceed 250 GB again within six months of the first contact, your service will be subject to termination and you will not be eligible for either residential or commercial internet service for twelve (12) months.  We know from experience that most customers curb their usage after our first call.  If your account is terminated, after the twelve (12) month period expires, you may resume service by subscribing to a service plan appropriate to your needs".

Sunday, July 17, 2011

ZTE's ZOOMs (DPI/PCRF) Solution Customer List

    
A year ago ZTE launched ZOOMs, a DPI/PCRF solution (covered here). The solution's web page has been updated recently, and now includes a list of ZTE customers deploying ZOOMs:

"Till 2011Q2, ZOOMs has already been deployed in dozens of countries, including China Mobile, HongKong CSL [here], Indonesia PT Smart, Germany KPN E-PLUS, Montenegro Telenor, Portugal ZAPP, Malaysia DiGi, French OMT, etc.

See "ZOOMs: ZTE Optimized Operation and Management System" - here.

"ZOOMs solution includes the following 4 parts:
  • Intelligent Deep Packet Inspection Gateway xGW (inbuilt DPI)
  • Dynamic Policy and Charging Control (PCRF/SPR)
  • Intelligent User Behavior Analysis System (UBAS)
  • Mature Online/Offline Charging System (OCS/OFCS)
"Use dedicated DPI hardware index chip, to reduce performance degradation of GW system caused by DPI. GW performance degradation is less than 30% under DPI function. Use SPI, DPI, HPI and other packet inspection technologies. More than 300 types of protocols could be identified."

US: Not all Democrats Support Net-Neutrality (and Caching Kills it anyway)

  
The White House recently said that "The FCC has carefully crafted rules to promote competition while balancing the technical needs of Internet providers" (here), but it seems that not all Democrats think the same.

In a recent article in the Huffington Post Tech section, Everett Ehrlich (pictured), former Undersecretary to Ron Brown, Mickey Kantor, and Bill Daley in the Clinton Administration, writes that he "can't for the life of me understand why my fellow-travelers want to impose this burden on the burgeoning broadband Internet", and claims that Net Neutrality is not needed and that the big content providers can override it anyway.

"Would you subscribe to an ISP that gave you Fox News but not Olbermann, or gave iTunes an exclusive on music, or only allowed Warner Brothers movies on their system? It's a ridiculous proposition (and one that could be addressed with anti-trust law if I'm entirely wrong, which I'm not) ... And, second, the Internet isn't "neutral" right now! Big websites cache their content in server farms around the world, like squirrels burying nuts for the winter. That way, they reach you faster than the "little guy," even though the net is allegedly "neutral."

See "Why Liberals Should Think Twice About Net Neutrality" - here.

For the 2nd point - see also the debate in the UK on BT's caching service - "BT's Wholesale Content Connect Service and Net Neutrality" - here and "BT CTO: "Caching does not Breach Net Neutrality"" - here.