MWC: Creating Application Aware Mobile Networks

Every day new applications appear on the world's mobile communications networks and user behaviour keeps adapting to new ways of streaming video, sharing files and communicating. 

Operators must meet new business demands, such as traffic monetisation, as well as growing legal pressure from net neutrality, data loss prevention and privacy legislation.

Operators are addressing these challenges by incorporating greater intelligence and creating “application-aware” networks. 

This is not a plug-and-play upgrade: it means adding application intelligence across a whole range of network components – firewalls, Unified Threat Management (UTM) systems, Intrusion Prevention or Detection Systems (IPS/IDS), web and email gateways, QoS-shaping edge routers, policy routers, mobile packet gateways, Deep Packet Inspection (DPI) engines and application delivery controllers.

In addition to all this added complexity in the infrastructure, the sheer number and variety of applications, as well as security threats, present an enormous challenge – both for the network operators and for the network equipment manufacturers.

Faced with the burgeoning complexity of these application-aware networks, how can one reliably and consistently test them? Is the application detection accurate? Are the policies correctly implemented? What is the optimal policy configuration? How quickly can we test policies for new apps? Are the adopted policies making the network more, or less, secure?

The three key requirements for testing application-aware networks

Network traffic is not uniform in structure: thousands of different applications running on hundreds of different traffic protocols all share space at any time on the network, each subject to unexpected surges in demand. 

Convergence brings ever more applications into the picture – social networking, voice and rich media applications, cloud computing, video, smartphone traffic – and into this mix comes the growth of malicious attack traffic.

Modelling realistic traffic conditions means modelling this complex mix. So the first task is to establish which protocols are likely to be present on the network, and their relative significance.

Next we must decide how realistically the test can model them: it is one thing to describe a protocol in terms of its packet structure, but does the model realistically maintain the timing between packets? For example: an application runs on broadband much faster than a similar application from a mobile phone.
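One way to picture this timing requirement is a replayer that preserves the inter-packet gaps of a recorded flow rather than blasting payloads back-to-back. The sketch below is illustrative only – the flow contents and timings are invented, not taken from any real capture:

```python
import time

# Hypothetical recorded flow: (payload, seconds elapsed since previous packet).
# The gaps are illustrative: a short broadband-like gap, then a mobile-like stall.
recorded_flow = [(b"GET /video HTTP/1.1\r\n\r\n", 0.0),
                 (b"chunk-1", 0.020),
                 (b"chunk-2", 0.180)]

def replay(flow, send, time_scale=1.0):
    """Replay a flow, preserving inter-packet gaps (scale them up to
    emulate a slower link, e.g. the same app running over mobile)."""
    for payload, gap in flow:
        time.sleep(gap * time_scale)
        send(payload)

sent = []
replay(recorded_flow, sent.append, time_scale=0.0)  # zero delay for the demo
```

The key point is that `time_scale` models the link, while the payloads and their ordering stay identical – the same application, different timing profile.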

When more than 80% of your traffic is driven by applications, test tools that focus purely on protocols are inadequate, because they try to mimic applications by inserting random data in the application payload – you can’t test real networks with fake traffic.

Also, familiar applications are constantly updated so that, even if the application is the same, the actual traffic on the network could be very different – and scaling that traffic for volume testing could have a very different impact.

Applications are being released at an astonishing rate, and new attack threats keep emerging. It is no longer possible to keep up with these changes manually, downloading and configuring real clients and servers to build realistic test scenarios. There has to be some way to keep up and continually adapt tests to match the latest traffic patterns.

When it comes to creating the actual test, many solutions are too inflexible and testers struggle to model today’s traffic – forcing them to rely on test vendors to take time and develop custom tests for them.

Existing tools typically address a single discipline – whether scale testing, security testing or functional testing – and assets created by one are not usable by another, forcing you to waste time on duplicative tests. More hardware, and more time and effort integrating results, make testing more complicated and labour-intensive, increasing the risk of testing being postponed or sidelined in favour of higher-profile business demands.

To summarise, then, three factors are vital for testing today’s application-aware networks:

• Accurate modelling of real-life traffic
• Rapid adaptation to new applications, traffic protocols and cyber attacks
• Flexibility and simplicity in the actual test process

Accurate modelling

Accurate modelling requires re-creation of actual application flows. Some test tools focus on network protocols, and are very limited at the application level – falling back on synthetic application traffic, with random strings in the payload.

An application-aware network will not detect an application or react to it in a realistic manner unless the test tool can recreate application-level state, with regard to cookies, session IDs, NAT translations, ALG-driven modifications, authentication challenges, lengths and checksums.
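As a minimal sketch of what "maintaining application state" means, the snippet below carries a session cookie from one message into the next and recomputes the content length, as a real client would – instead of stuffing random bytes into the payload. The host name and cookie value are invented for illustration:

```python
# Sketch (assumed field values): state carried between messages of a flow.

def parse_set_cookie(response_headers):
    """Extract the session cookie a server handed back, if any."""
    for name, value in response_headers:
        if name.lower() == "set-cookie":
            return value.split(";")[0]          # e.g. "JSESSIONID=abc123"
    return None

def build_follow_up(method, path, cookie, body):
    """Build the next request with the live cookie and a correct length."""
    lines = [f"{method} {path} HTTP/1.1", "Host: test.example"]
    if cookie:
        lines.append(f"Cookie: {cookie}")       # state from the prior response
    lines.append(f"Content-Length: {len(body)}")  # recomputed, not random
    return "\r\n".join(lines) + "\r\n\r\n" + body

cookie = parse_set_cookie([("Set-Cookie", "JSESSIONID=abc123; Path=/")])
request = build_follow_up("POST", "/api/upload", cookie, "payload")
```

A DPI engine or stateful firewall tracking the session would classify this follow-up correctly; the same request with a random cookie and wrong length would not exercise the same code paths.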

So testing application-aware systems requires a tool that accurately recreates application traffic and maintains application state – and this must happen fast, for all versions of the application.

When networks become congested under normal operation, special TCP (Transmission Control Protocol) flow and congestion control algorithms cut in, changing the behaviour of both the test tool and the target. If the test tool does not behave exactly as a real client would, it will produce inaccurate test scenarios and misleading results. Yet many load test tools use custom-developed, hardware-accelerated TCP implementations optimised for maximum network throughput.

When congestion control is activated on the network, the tool ignores it and the load generation continues unchanged – with no allowance for time-outs and other realistic congestion situations. 

To model the true behaviour of the network, you need a new generation test solution designed to behave exactly as real clients would in the face of congestion and flow control situations.
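One pragmatic way to get realistic congestion behaviour is to drive traffic through the operating system's real TCP stack rather than crafting packets directly; the kernel then supplies genuine flow control, retransmission and time-outs for free. The sketch below shows the shape of such a client against a local echo server (a stand-in for the device under test):

```python
import socket
import threading

def echo_server(listener):
    """Toy stand-in for the target: echo whatever arrives on one connection."""
    conn, _ = listener.accept()
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)

listener = socket.create_server(("127.0.0.1", 0))   # ephemeral port
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

# Because this uses the kernel's TCP stack, sendall() blocks when the peer's
# receive window fills, and recv() honours the time-out – behaviour a
# hardware packet blaster simply does not exhibit.
client = socket.create_connection(listener.getsockname(), timeout=5.0)
client.sendall(b"probe")
reply = client.recv(4096)
client.close()
```

The trade-off is throughput: a kernel-stack client is slower than hardware-accelerated generation, which is exactly why repeatable, client-faithful behaviour and raw speed pull test-tool design in opposite directions.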

Finally, you need consistent and repeatable results. Many load test tools are designed for maximum throughput at the expense of repeatability. Test results, however, often need to be compared between runs to determine whether changes are effective. Repeatable tests with consistent results allow problems to be isolated in an efficient and deterministic manner.

Need for speed

How long does it take you or your test vendor to develop tests for new applications? The sooner you can test, the sooner you can identify and resolve any potential issues. Ready-to-run test cases derived from real-world applications help test teams become productive more quickly.

The larger and more diverse the set of test cases, the easier it is for test teams to reflect the ever-evolving types of applications seen on their network.

Cloud-based test solutions allow immediate download of ready-to-run tests for all popular consumer and business apps such as: Facebook, Skype, Netflix, Twitter, BitTorrent, VMware and Google Docs. Hundreds of new tests can be added and updated every month to keep up with versions and client types and future-proof your tests.

How long does it take to validate your test cases? The more time validation takes, the less time is left to maximise test coverage. Without total visibility into the application layer, you cannot be 100% sure what the tool is sending, nor can you control it at the application layer.

You often have to resort to packet captures and debugging sessions to validate whether the application traffic is legitimate, erroneous, or even contains an application payload at all.

So you must ensure that your test tool allows complete visibility into transactions, field types, payload, and content, leaving no ambiguity of what is being sent or received during tests.

Another time-saving tip is to use a single, unified solution for scale, security and functional testing. Many tools are specialized for a single discipline, so results from one product are not usable by another – time is wasted creating duplicative tests. A unified test solution will deliver scale, security and functional test assets on the same common platform for efficient, streamlined testing and maximum return on investment.  

One more key consideration for application-aware networks in a fast evolving ecosystem is the need for resiliency in the face of unexpected variations in field data and values. 

Testing for unexpected divergences from the norm is known as negative testing or “fuzz testing”, and very few test tools support this. Those that do too often rely on random “bit flipping” and flooding of malformed packets – a rudimentary form of fuzzing that does not help build intelligent fuzz test cases for all the field types at the application layer.

Today’s most advanced test tools automatically convert any application flow into thousands of intelligent fuzz test cases for every field type at the application layer – increasing code and test coverage, and reducing time for test creation, fault isolation and remediation.
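The difference between bit flipping and field-aware fuzzing can be sketched in a few lines: start from one valid flow, then mutate one field at a time while keeping every other field legal, so each failure points at a specific field. The base request and the mutation sets below are illustrative, not an exhaustive corpus:

```python
# Sketch (illustrative mutations only): field-aware fuzz case generation.

BASE_REQUEST = {"method": "GET", "path": "/index.html", "version": "HTTP/1.1"}

# A few mutations per field type; real tools derive thousands automatically.
MUTATIONS = {
    "method":  ["", "G" * 4096, "GET\x00"],
    "path":    ["/", "/" + "A" * 8192, "/%00%ff"],
    "version": ["HTTP/9.9", "HTTP/", "http/1.1\r\nX: y"],
}

def fuzz_cases(base, mutations):
    """Yield one request per (field, mutated value); all other fields stay
    valid, so any crash or misclassification isolates to one field."""
    for field, values in mutations.items():
        for value in values:
            case = dict(base)
            case[field] = value
            yield case

cases = list(fuzz_cases(BASE_REQUEST, MUTATIONS))
```

Because each case differs from the valid baseline in exactly one field, fault isolation is immediate – unlike a randomly corrupted packet, where the trigger could be anywhere.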

Flexible operation

Test requirements are constantly changing as the production environment evolves, demanding a future-proof solution that can quickly recreate new applications as soon as they become available. This requires a flexible solution, and one that is easy to operate and automate without training and specialist coding skills. 

As already mentioned, a solution that integrates a constantly updated cloud database of application updates and threats gives the tester a running start, because it provides immediate access to the latest tests.

However, true flexibility also requires the ability to make closely targeted adjustments – to select any field and edit or modify all the content and payload at the application layer. For example: changing the contents of the HTTP payload; crafting a valid GET request followed by several illegal GET requests on the same transport connection, to see if it bypasses the security inspection engine; or modifying port numbers, embedded IP addresses, session IDs or URLs, to better understand what would happen in a real-world scenario.
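That second example – a legal request followed by malformed ones pipelined on a single connection – is a classic evasion probe against engines that only validate the first request they see. A minimal sketch of building such a byte stream (the host name and malformed paths are invented for illustration; sending it would just be one `sendall` over a single socket):

```python
# Sketch (hypothetical evasion probe): one valid GET followed by illegal GETs
# pipelined on the same connection.

def get_request(path, host="test.example"):
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()

pipeline = b"".join([
    get_request("/index.html"),                                   # legal
    b"GET /../../etc/passwd HTTP/1.1\r\nHost: test.example\r\n\r\n",  # traversal
    b"GET /blocked HTTP/1.0\r\n\r\n\x00garbage",                  # malformed
])
# `pipeline` would be written to one socket in a single burst; an inspection
# engine that stops parsing after the first valid request may pass the rest.
```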

It is also important to know whether the test tool allows you to parameterise protocol fields at the application layer – can it supply a set of values for data-driven feature or scale/load testing? Data-driven testing allows quicker adjustments to inputs and outcomes.

You may want to play with changes such as URLs, user names, passwords, folder names, phone numbers, ports, IP addresses, etc. to see how the test target will respond to such changes in the production environment.

Most test tools only allow such changes in a few pre-defined fields, whereas more sophisticated solutions should allow this across all layers. Once the protocol field at the application layer has been parameterised, you should then be able to insert custom values, using a list, spreadsheet, range, or random values.

For example, you could test URL filtering by modifying the URL field of the payload with the click of a button to see what happens when thousands of alternate values are supplied.
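The shape of such a data-driven URL-filtering test can be sketched briefly: one request template, a list of URL values with their expected verdicts, and a check of each outcome. The filter below is a toy stand-in for the device under test, and the blocklist and URLs are invented:

```python
# Sketch (toy filter and invented data): data-driven URL filtering test.

TEMPLATE = "GET {url} HTTP/1.1\r\nHost: test.example\r\n\r\n"
BLOCKLIST = ("/gambling", "/malware")

def url_filter(request):
    """Toy stand-in for the device under test: block listed URL prefixes."""
    url = request.split(" ")[1]
    return "BLOCK" if url.startswith(BLOCKLIST) else "ALLOW"

# Each row: (URL value to substitute, expected verdict). In practice this
# list would come from a spreadsheet or generated range with thousands of rows.
test_data = [("/news/today",          "ALLOW"),
             ("/gambling/poker",      "BLOCK"),
             ("/malware/dropper.exe", "BLOCK")]

results = [(url, url_filter(TEMPLATE.format(url=url)) == expected)
           for url, expected in test_data]
```

Scaling the run is then just a matter of lengthening `test_data` – the template and the checking logic never change.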

Application aware networks present new levels of complexity for the test engineer, and any attempt to deliver realistic tests using yesterday’s test systems would demand massive time and labour, and still not achieve accurate and consistent results.

Today’s networks need a unified scale, security and functional testing solution that can recreate any application, any protocol, at any time. You need the quickest, simplest solution to increase your test coverage, accelerate your test cycles and accurately recreate the production reality of your network. 

Source: http://www.electronicsweekly.com/Articles/2013/02/28/55672/mwc-creating-application-aware-mobile-networks.htm