Algorithms Documentation

This page contains the documentation of the algorithms package (netbench.classification.algorithms).

The bclassification Module

Module providing abstract class for packet classification algorithms.

class netbench.classification.algorithms.bclassification.BClassification

Bases: object

Base abstract class for all classification algorithms.

check(packetheader)

Return True if the algorithm's result is correct.

packetheader: Instance of the PacketHeader class.

classify(packetheader)

Classify the packet using the algorithm. Abstract method. Return the list of matched rules. If the algorithm does not support multimatch, the list will have only one item. If the packet matches no rule, the list will be empty.

packetheader: Instance of the PacketHeader class.

load_ruleset(ruleset)
Load rules and generate all necessary data structures. Abstract method.
report_memory()
Print detailed info about algorithm memory requirements. Abstract method.
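
A minimal sketch of how a concrete algorithm is expected to implement this interface. The LinearSearch class, the assumption that a ruleset is directly iterable, and the rule.match() accessor are illustrative assumptions for this example, not part of the package:

    from netbench.classification.algorithms.bclassification import BClassification

    class LinearSearch(BClassification):
        """Illustrative subclass: linear search over the loaded rules."""

        def load_ruleset(self, ruleset):
            # A real algorithm builds its data structures here; we assume the
            # ruleset can simply be iterated.
            self.rules = list(ruleset)

        def classify(self, packetheader):
            # Multimatch: return every rule the packet matches (may be empty).
            # rule.match() is an assumed accessor used only for illustration.
            return [r for r in self.rules if r.match(packetheader)]

        def report_memory(self):
            print("rules stored: %d" % len(self.rules))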

The bva2 Module

class netbench.classification.algorithms.bva2.BVA2

Bases: netbench.classification.algorithms.bva.BVA

Bit-vector algorithm, second version. It is a subclass of BVA (the main bit-vector implementation).

classify(packetheader)

Classify the packet using the algorithm. Return the list of all rules that match the given packet.

packetheader: Instance of the PacketHeader class.

load_ruleset(ruleset)
report_memory()
Print detailed info about algorithm memory requirements.

The hypercuts Module

Created on Dec 4, 2011

@author: Marek Visnovsky <xvisno00@stud.fit.vutbr.cz>

class netbench.classification.algorithms.hypercuts.HyperCuts(spfac=2, bucket_size=8)

Bases: netbench.classification.algorithms.bclassification.BClassification

HyperCuts algorithm for packet classification

classify(packetheader)

Classifies a packet using the HyperCuts algorithm

return: list of rules matching the given packet header

load_ruleset(ruleset)
Load the ruleset and build the tree for the HyperCuts algorithm
report_memory()
One can modify the properties used for the memory-requirement calculation by changing the desired constants in the Trie class
class netbench.classification.algorithms.hypercuts.Node(region)

Bases: object

Class for node representation in HyperCuts trie

add_child(node)

Adds a node to the list of children

node: child to add

add_rule(rule)

Add one rule to the node's ruleset

rule: rule to add

add_rules(ruleset)

Add the rules to be stored in the node

ruleset: RuleSet containing rules for this node

count_childs()
Returns the number of children
get_child(index)
Returns the child at the specified index
get_childs()
Returns the children of the node
get_cuts()
Returns the cuts applied to the node
get_region()
Returns the node's region
get_rules()
Returns the rules stored in the node
is_leaf()
Return True if the node is a leaf, i.e. has no children
keep_rules(ruleset, dim, max_rules)

Heuristic for keeping common rules in the parent (current) node

ruleset: rules that can potentially be kept
dim: dimensions in which the rules cover the node
max_rules: maximum number of rules to keep in the node

match(packetheader)
Returns the list of rules matching the packet header
set_leaf()
Marks the node as a leaf
set_number_of_cuts(nc)

Stores number of cuts in this node

nc: dictionary of number of cuts

class netbench.classification.algorithms.hypercuts.Trie(spfac=2, bucket_size=8)

Bases: object

Trie structure for HyperCuts algorithm

build_trie(ruleset)

Builds Trie for HyperCuts

ruleset: rules for trie

create_node(ruleset, region, depth=0)
Creates a node and its children recursively
get_stats()
Returns statistics about Trie
process_packet_header(packetheader)

Preprocess packet header for classification

packetheader: packet to preprocess

search_trie(packet_header)

Searches the trie for the corresponding rules

packet_header: the packet header to find matching rules for

set_heuristic(name, activate=True)

Sets a heuristic on (or off if activate is False)

name: name of the heuristic, e.g. push_rules
activate: True to turn the heuristic on, False to turn it off
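
A minimal usage sketch of the classifier above. The ruleset and packetheader objects are placeholders that would come from the netbench parsers; their construction is not shown here:

    from netbench.classification.algorithms.hypercuts import HyperCuts

    hc = HyperCuts(spfac=2, bucket_size=8)   # space factor and leaf bucket size
    hc.load_ruleset(ruleset)                 # builds the HyperCuts decision tree
    matched = hc.classify(packetheader)      # rules matching the packet header
    hc.report_memory()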

The msca Module

Module for experiments with Multi-subset Crossproduct Algorithm

class netbench.classification.algorithms.msca.BloomFilter(capacity, error_rate=0.0005)
add(item)
get_array_size()
hashItem(item)
class netbench.classification.algorithms.msca.MSCA(subset_count=3)

Bases: netbench.classification.algorithms.bclassification.BClassification

Multi-subset Crossproduct Algorithm.

classify(packetheader)
load_ruleset(ruleset)
report_memory()
Print detailed info about algorithm memory requirements.
netbench.classification.algorithms.msca.NLT_compare(a, b)
netbench.classification.algorithms.msca.subset_NLTSS(ruleset, subset_count)
netbench.classification.algorithms.msca.subset_simple(ruleset, subset_count)
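
For orientation, the usual Bloom-filter sizing formulae relate the capacity and error_rate parameters above to the array size and the number of hash functions. Whether BloomFilter.get_array_size() uses exactly these formulae is an assumption of this sketch:

    import math

    def bloom_parameters(capacity, error_rate=0.0005):
        # m = -n*ln(p)/(ln 2)^2 bits, k = (m/n)*ln 2 hash functions
        bits = int(math.ceil(-capacity * math.log(error_rate) / math.log(2) ** 2))
        hashes = max(1, int(round(bits / float(capacity) * math.log(2))))
        return bits, hashes

    print(bloom_parameters(1000))   # roughly a 15.8 kbit array and 11 hash functions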

The pcca Module

Module for experiments with Prefix Colouring Crossproduct Algorithm (temporary codename)

class netbench.classification.algorithms.pcca.PCCA

Bases: netbench.classification.algorithms.bclassification.BClassification

Prefix Colouring Crossproduct Algorithm

classify(packetheader)
Classifies a packet using the prefix colouring algorithm. Returns the rule with the highest priority, or None if no match occurred.
create_graph()
Creates acyclic graph for perfect hash function
expand_pseudorules_with_colouring()

Add rules to the ruleset so that the LPM combinations that must be covered according to the colouring are covered by some rules.

Every rule should define a condition for all fields, and prefixes MUST be coloured prior to this step.

get_dimensions_list()
Get dimensions list for given ruleset
load_ruleset(ruleset, remove_spoilers=8, colors=1)
Process input set of rules, generate all necessary structures
prefixes_count(dimension)
Return the number of prefixes in the given dimension. If a nonexistent dimension is requested, None is returned
print_debug_info()
Print debug info
print_parsed_rules()
Prints parsed rules
report_memory()
Print statistics: the number of pseudorules, the number of prefixes in each dimension, the number of colours in each dimension, and the size of the perfect hash table.
set_colouring(colours)
Set the number of colours in each dimension and assign a colour to every prefix. colours is a dictionary which specifies the number of colours used in each dimension.
set_max_colouring(colours=4)
Assigns the same number of colours to each dimension and also fills the colour bitmaps
netbench.classification.algorithms.pcca.acyclic(graph)

Returns True if the given graph is acyclic.

graph: an undirected graph given as a list of lists of connected nodes and edge weights (weights are ignored):

[[(node1,weight1),(node2,weight2),...],...]
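
One possible way to implement such a test over this adjacency-list format is an iterative depth-first search with parent tracking. This is only an illustration, not necessarily the module's own implementation:

    def is_acyclic(graph):
        visited = [False] * len(graph)
        for start in range(len(graph)):
            if visited[start]:
                continue
            visited[start] = True
            stack = [(start, -1)]                 # (node, parent)
            while stack:
                node, parent = stack.pop()
                for neighbour, _weight in graph[node]:
                    if not visited[neighbour]:
                        visited[neighbour] = True
                        stack.append((neighbour, node))
                    elif neighbour != parent:
                        return False              # back edge closes a cycle
        return True

    path = [[(1, 1)], [(0, 1), (2, 1)], [(1, 1)]]   # 0-1-2 chain: acyclic
    print(is_acyclic(path))                         # True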
netbench.classification.algorithms.pcca.cross_sublists_coloured(lists, conditions)
Crossproducts coloured prefixes.

The bva Module

class netbench.classification.algorithms.bva.BVA

Bases: netbench.classification.algorithms.bclassification.BClassification

Bitvector algorithm

classify(packetheader)

Classify the packet using the algorithm. Return the list of all rules that match the given packet.

packetheader: Instance of the PacketHeader class.

load_ruleset(ruleset)

Load rules and create bitvectors for non-overlapping intervals (ranges) in all dimensions

ruleset - Instance of the Ruleset class

report_memory()
Print detailed info about algorithm memory requirements.
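
Conceptually, classification in a bit-vector scheme is a per-dimension interval lookup followed by a bitwise AND of the rule bitmaps. The data layout in this sketch is an assumption for illustration, not the module's internal representation:

    def bv_classify(dim_intervals, header_values, rule_count):
        # dim_intervals[d] is a sorted list of (low, high, bitmap) covering dimension d
        result = (1 << rule_count) - 1            # start with all rules set
        for intervals, value in zip(dim_intervals, header_values):
            for low, high, bitmap in intervals:
                if low <= value <= high:
                    result &= bitmap
                    break
            else:
                return []                         # no interval contains the value
        return [i for i in range(rule_count) if result >> i & 1]

    # two rules, one dimension: rule 0 covers 0-9, rule 1 covers 5-15
    dims = [[(0, 4, 0b01), (5, 9, 0b11), (10, 15, 0b10)]]
    print(bv_classify(dims, [7], 2))              # [0, 1]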

The pfca Module

Module for experiments with the Prefix Filtering Classification Algorithm.

class netbench.classification.algorithms.pfca.GRule

Generalization Rule for the PFCA.

applies_to(rule)
Return True if this GR can be applied to the given rule
apply_to(lpmvect)
Generalize the LPM vector; do not modify it if the GR does not apply.

lpmvect: dictionary of prefixes
class netbench.classification.algorithms.pfca.PFCA

Bases: netbench.classification.algorithms.bclassification.BClassification

Prefix Filtering Classification Algorithm.

classify(packetheader)

Classify the packet using the algorithm. Return the list of all rules that match the given packet.

packetheader: Instance of the PacketHeader class.

create_graph()
Creates acyclic graph for perfect hash function
load_ruleset(ruleset, remove_spoilers=8)

Load rules and create generalization rules.

ruleset - Instance of the Ruleset class

report_memory()
Print detailed info about algorithm memory requirements.
netbench.classification.algorithms.pfca.acyclic(graph)

Returns True if the given graph is acyclic.

graph: an undirected graph given as a list of lists of connected nodes and edge weights (weights are ignored): [[(node1,weight1),(node2,weight2),...],...]

The dcfl Module

Module for experiments with Distributed Crossproducting of Field Labels algorithm.

class netbench.classification.algorithms.dcfl.BloomFilterArrayAggNode(meta_labels_count, labels_count, valid_combinations, mem_width)
display()
get_memory_req()
Returns number of bits of memory needed by the node.
get_sma(max_meta_labels, max_labels)
Returns maximal number of sequential memory accesses per item
process(meta_labels, labels)
meta_labels and labels are arrays of numbers
class netbench.classification.algorithms.dcfl.DCFL(mem_width=72, verbose=True)

Bases: netbench.classification.algorithms.bclassification.BClassification

Distributed Crossproducting of Field Labels algorithm.

classify(packetheader)

Classify the packet using the algorithm. Return the list of all rules that match the given packet.

packetheader: Instance of the PacketHeader class.

load_ruleset(ruleset, best_seq=None)

Load rules and create optimal aggregation network.

ruleset - Instance of the Ruleset class

report_memory()
Print detailed info about algorithm memory requirements.
class netbench.classification.algorithms.dcfl.FieldLabelIndexingAggNode(meta_labels_count, labels_count, valid_combinations, mem_width)
display()
get_memory_req(used_bits_only=False)
Returns number of bits of memory needed by the node. Some bits may be unused because of alignment to memory width. If you don’t want to count these bits, set used_bits_only to True.
get_sma(max_meta_labels, max_labels)
Returns maximal number of sequential memory accesses per item
process(meta_labels, labels)
meta_labels and labels are arrays of numbers
class netbench.classification.algorithms.dcfl.MetaLabelIndexingAggNode(meta_labels_count, labels_count, valid_combinations, mem_width)
display()
get_memory_req(used_bits_only=False)
Returns number of bits of memory needed by the node. Some bits may be unused because of alignment to memory width. If you don’t want to count these bits, set used_bits_only to True.
get_sma(max_meta_labels, max_labels)
Returns maximal number of sequential memory accesses per item
process(meta_labels, labels)
meta_labels and labels are arrays of numbers
netbench.classification.algorithms.dcfl.covers(covering, covered)
netbench.classification.algorithms.dcfl.max_num_of_matching_fields(set)
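
The used_bits_only flag above only matters because entries are packed into fixed-width memory words. The following sketch, with invented entry sizes, illustrates the difference between the aligned and the raw bit counts:

    def aligned_bits(entry_bits, mem_width=72, used_bits_only=False):
        if used_bits_only:
            return sum(entry_bits)                # count only the bits actually used
        words = sum((b + mem_width - 1) // mem_width for b in entry_bits)
        return words * mem_width                  # each entry padded to whole words

    entries = [40, 100, 72]                       # hypothetical entry sizes in bits
    print(aligned_bits(entries))                  # 288 bits including alignment padding
    print(aligned_bits(entries, used_bits_only=True))   # 212 bits actually used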

The pfca2 Module

Module for experiments with the Prefix Filtering Classification Algorithm ver 2.

class netbench.classification.algorithms.pfca2.GRule2

Generalization Rule for the PFCA2.

applies_to(rule)
Return True if this GR can be applied to the given rule
apply_to(lpmvect)
Generalize the LPM vector; do not modify it if the GR does not apply.

lpmvect: dictionary of prefixes
class netbench.classification.algorithms.pfca2.PFCA2

Bases: netbench.classification.algorithms.bclassification.BClassification

Prefix Filtering Classification Algorithm ver 2.

classify(packetheader)

Classify the packet using the algorithm. Return the list of all rules that match the given packet.

packetheader: Instance of the PacketHeader class.

create_graph()
Creates acyclic graph for perfect hash function
load_ruleset(ruleset, remove_spoilers=8)

Load rules and create generalization rules.

ruleset - Instance of the Ruleset class

report_memory()
Print detailed info about algorithm memory requirements.
netbench.classification.algorithms.pfca2.acyclic(graph)

Returns True if the given graph is acyclic.

graph: an undirected graph given as a list of lists of connected nodes and edge weights (weights are ignored): [[(node1,weight1),(node2,weight2),...],...]

The hicuts Module

Module for experiments with Hierarchical Intelligent Cuttings Algorithm

class netbench.classification.algorithms.hicuts.HiCuts(binth=16, spfac=2)

Bases: netbench.classification.algorithms.bclassification.BClassification

Hierarchical Intelligent Cuttings.

classify(packetheader)
count_nodes(node)
Return tuple (depth, number of nodes, number of node pointers, number of rule pointers)
field_size(dim)
get_range(rule, dim)
load_ruleset(ruleset)
print_XML()
print_node(node, tab=0, recur=1)
print_node_XML(file, node)
remove_redundancy(node)
report_memory()
Print detailed info about algorithm memory requirements.
select_cut_dimension(node, algorithm)
Return the [dimension, number of cuts] list.
class netbench.classification.algorithms.hicuts.Node
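
The spfac parameter bounds how aggressively a node may be cut. Below is a hedged sketch of the classic HiCuts space-measure heuristic, operating on the rule ranges projected onto one dimension; it illustrates the idea and is not necessarily identical to select_cut_dimension above:

    def choose_num_cuts(rule_ranges, lo, hi, spfac=2):
        # rule_ranges: (low, high) projections of the node's rules onto one dimension
        # (lo, hi): the node's region in that dimension
        cuts = 1
        while True:
            nxt = cuts * 2
            width = (hi - lo + 1) / float(nxt)
            # how many rules fall into each of the nxt child intervals
            per_child = [sum(1 for rl, rh in rule_ranges
                             if rl < lo + (i + 1) * width and rh >= lo + i * width)
                         for i in range(nxt)]
            if sum(per_child) + nxt > spfac * len(rule_ranges):
                return cuts                       # keep the last cut count within budget
            cuts = nxt

    ranges = [(0, 3), (4, 7), (8, 11), (12, 15)]   # four disjoint rules on a 0-15 field
    print(choose_num_cuts(ranges, 0, 15))          # 4 cuts fit within the spfac=2 budget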

The phca Module

Module for experiments with Perfect Hashing Crossproduct Algorithm

class netbench.classification.algorithms.phca.PHCA(verbose=True)

Bases: netbench.classification.algorithms.bclassification.BClassification

Perfect Hashing Crossproduct Algorithm.

classify(packetheader)

Classify the packet using the algorithm. Return a list containing the highest-priority rule that the given packet matches (or an empty list if it matches no rule).

packetheader: Instance of the PacketHeader class.

load_ruleset(ruleset, remove_spoilers=8)
Load rules and generate nodetable (table of weights of graph nodes used for perfect hashing)
report_memory()
Print detailed info about algorithm memory requirements.
netbench.classification.algorithms.phca.acyclic(graph)

Returns True if the given graph is acyclic.

graph: an undirected graph given as a list of lists of connected nodes and edge weights (weights are ignored):

[[(node1,weight1),(node2,weight2),...],...]
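
The node table mentioned in load_ruleset() is the weight table of an acyclic-graph perfect hash (as in CHM-style constructions): two ordinary hash functions select two graph nodes, and the sum of their stored weights gives the final slot. The hash functions and seeds in this sketch are invented for illustration and are not the module's own:

    def perfect_hash(key, nodetable, seed1=0x9e3779b9, seed2=0x85ebca6b):
        n = len(nodetable)
        v1 = hash((seed1, key)) % n               # first graph node
        v2 = hash((seed2, key)) % n               # second graph node
        return (nodetable[v1] + nodetable[v2]) % n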

The mspcca Module

Module for experiments with combined classification algorithms

class netbench.classification.algorithms.mspcca.MSPCCA(subset_count=3, remove_spoilers=8, colors=8)

Bases: netbench.classification.algorithms.bclassification.BClassification

Multi-subset Crossproduct + Prefix Coloring Classification Algorithm.

classify(packetheader)
Classify single packet.
load_ruleset(ruleset)
report_memory()
Print detailed info about algorithm memory requirements.