Welcome to
High Performance Reconfigurable Computing System Engineering Group
Dr Noor Mahammad Sk - Sponsored & Consultancy Projects

Low Cost and High Throughput Firewall Architecture
Vegesna S M Srinivasavarma & Noor Mahammad Sk

Sponsored by Ministry of Electronics and Information Technology, Govt. of India

  1. Objectives of the Research:

  The following are the major objectives of the work:

    • Design and implementation of a low-cost, high-throughput firewall.
    • An efficient framework for caching packet classification rules on TCAMs in accordance with traffic characteristics.
    • A two-level classification engine in which level-1 is a TCAM classifier with a smaller rule capacity and level-2 is a software classifier.
    • A rule update engine that assists the classifiers by monitoring the temporal behavior of the rules and performing timely updates of popular rules onto level-1.
    • Identification and effective handling of the crucial design challenges of the proposed framework.
    • Simulation results show that the architecture can achieve an average throughput of 250 Gbps by caching only 10% of the total rules, for rule databases of size 10,000.
    • To the best of our knowledge, the proposed architecture is the only traffic-aware TCAM-based architecture that provides a fully deployable framework and can scale to speeds beyond 250 Gbps (OC-1920 and beyond).
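The caching objective rests on traffic locality: a small set of popular rules matches most packets. The following sketch illustrates that intuition under an assumed Zipf-like traffic distribution (the distribution, rule counts, and variable names are illustrative assumptions, not taken from the project):

```python
# Illustrative simulation (assumptions ours): with skewed traffic, caching a
# small fraction of the most popular rules in the L1 TCAM captures a large
# share of lookups, far above the 10% cache fraction itself.
import random

random.seed(7)
n_rules, cache_frac = 1000, 0.10
weights = [1.0 / (r + 1) for r in range(n_rules)]  # Zipf-like rule popularity
cached = set(range(int(n_rules * cache_frac)))     # L1 holds the top 10% of rules

traffic = random.choices(range(n_rules), weights=weights, k=100_000)
hit_rate = sum(1 for r in traffic if r in cached) / len(traffic)
print(f"L1 hit rate with a {cache_frac:.0%} cache: {hit_rate:.1%}")
```

Under this assumed skew, the L1 hit rate is several times the cache fraction, which is the effect the proposed framework exploits.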

  2. Methodology:

  The key elements of the proposed methodology are as follows:

    1. The proposed system design uses a two-level classification engine.
    2. Level-1 (L1) Engine is a TCAM based classifier.
    3. Level-2 (L2) Engine is a decision tree based software classifier running on the line card.
    4. Packet headers from the buffers are fed for classification through a special function register (SFR), which stores a copy of each header and feeds it to the TCAM (L1) engine for classification.
    5. The SFR then waits for the no-match-detection (NMD) signal.
    6. If NMD = 0, meaning the packet was classified at L1, the SFR drops the current header and picks up the next one.
    7. Conversely, if NMD = 1, the SFR forwards the packet header to L2 for classification and then picks up the subsequent header.
    8. The rule update engine (RUE) monitors the classification process, identifies the popular rules over the last classification window, and updates them onto the L1 classifier accordingly.
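The steps above can be sketched in software. In this minimal model (class and method names are ours, and the TCAM is abstracted as an exact-match table rather than ternary hardware), L1 is a small cached subset, L2 holds the full rule set, and the RUE promotes the most popular rules at the end of each classification window:

```python
# Hypothetical sketch of the two-level classification flow: L1 (TCAM) cache,
# L2 (software) fallback, and a rule update engine (RUE) that re-caches
# popular rules. Names and structure are illustrative, not the project code.
class TwoLevelClassifier:
    def __init__(self, full_rules, l1_capacity):
        self.full_rules = full_rules                 # L2: complete rule database
        self.l1 = dict(list(full_rules.items())[:l1_capacity])  # L1: cached subset
        self.hits = {r: 0 for r in full_rules}       # per-rule counts for the RUE
        self.l1_capacity = l1_capacity

    def classify(self, header):
        # SFR feeds the header to the L1 (TCAM) engine first.
        if header in self.l1:                        # NMD = 0: matched at L1
            self.hits[header] += 1
            return self.l1[header], "L1"
        # NMD = 1: fall back to the L2 software classifier.
        action = self.full_rules.get(header, "default-deny")
        self.hits[header] = self.hits.get(header, 0) + 1
        return action, "L2"

    def update_l1(self):
        # RUE: promote the most popular rules of the last window into L1.
        popular = sorted(self.hits, key=self.hits.get, reverse=True)
        self.l1 = {r: self.full_rules[r]
                   for r in popular[:self.l1_capacity] if r in self.full_rules}
        self.hits = {r: 0 for r in self.hits}        # open a new window
```

In the real design the two engines run concurrently and the RUE updates the TCAM incrementally; this sequential sketch only shows the dispatch and promotion logic.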

  3. Implementation and Results:

    1. The proposed framework has been modeled and simulated using standard ClassBench rule databases.
    2. Simulation results show that the proposed caching framework can sustain a throughput of around 250 Gbps, with hit rates above 80%, for databases of size 10K.
    3. This is roughly a five- to six-fold increase compared with TCAM-only classification.
    4. Overall, the proposed architecture is scalable and easily implementable within existing NPU frameworks, and it has a better lower bound on periodic rule updates compared with other statistics-based classification techniques.
    5. To the best of our knowledge, the proposed framework is the only architecture in the line of traffic-aware classification optimization techniques that can achieve throughput beyond 250 Gbps for large databases with reasonable computational overheads.