<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 3.0.2) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

]>


<rfc ipr="trust200902" docName="draft-cui-nmrg-auto-test-00" category="info" consensus="true" submissionType="IRTF" tocInclude="true" sortRefs="true" symRefs="true">
  <front>
    <title abbrev="FALANPT">Framework and Automation Levels for AI-Assisted Network Protocol Testing</title>

    <author fullname="Yong Cui">
      <organization>Tsinghua University</organization>
      <address>
        <email>cuiyong@tsinghua.edu.cn</email>
      </address>
    </author>
    <author fullname="Yunze Wei">
      <organization>Tsinghua University</organization>
      <address>
        <email>wyz23@mails.tsinghua.edu.cn</email>
      </address>
    </author>
    <author fullname="Kaiwen Chi">
      <organization>Tsinghua University</organization>
      <address>
        <email>ckw24@mails.tsinghua.edu.cn</email>
      </address>
    </author>
    <author fullname="Xiaohui Xie">
      <organization>Tsinghua University</organization>
      <address>
        <email>xiexiaohui@tsinghua.edu.cn</email>
      </address>
    </author>

    <date year="2025" month="July" day="04"/>

    <area>Operations and Management</area>
    <workgroup>Network Management Research Group</workgroup>
    <keyword>protocol testing</keyword> <keyword>automation</keyword> <keyword>network verification</keyword>

    <abstract>



<t>This document presents an AI-assisted framework for automating the testing of network protocol implementations. The proposed framework encompasses essential components such as protocol comprehension, test case generation, automated script and configuration synthesis, and iterative refinement through feedback mechanisms.
In addition, the document defines a multi-level model of test automation maturity, ranging from fully manual procedures (Level 0) to fully autonomous and adaptive systems (Level 5), providing a structured approach to evaluating and advancing automation capabilities.
Leveraging recent advancements in artificial intelligence, particularly large language models (LLMs), the framework illustrates how AI technologies can be applied to enhance the efficiency, scalability, and consistency of protocol testing.
This document serves both as a reference architecture and as a roadmap to guide the evolution of protocol testing practices in light of emerging AI capabilities.</t>



    </abstract>

    <note title="About This Document" removeInRFC="true">
      <t>
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-cui-nmrg-auto-test/"/>.
      </t>
      <t>
        Discussion of this document takes place on the
        NMRG Research Group mailing list (<eref target="mailto:nmrg@irtf.org"/>),
        which is archived at <eref target="https://datatracker.ietf.org/rg/nmrg/"/>.
        Subscribe at <eref target="https://www.ietf.org/mailman/listinfo/nmrg/"/>.
      </t>
    </note>


  </front>

  <middle>



<section anchor="introduction"><name>Introduction</name>

<t>As protocol specifications evolve rapidly, traditional testing methods that rely heavily on manual effort or static models struggle to keep pace. Testing involves validating that a device's behavior complies with the protocol's defined semantics, often documented in RFCs. In recent years, emerging application domains such as the industrial internet, the low-altitude economy, and satellite internet have further accelerated the proliferation of proprietary or rapidly changing protocols, making comprehensive and timely testing even more challenging.</t>

<t>This document proposes an automated network protocol testing framework that reduces manual effort, enhances test quality, and adapts to new specifications efficiently. The framework consists of four key modules: protocol understanding, test case generation, test script conversion, and feedback-based refinement. It emphasizes modular design, reuse of existing knowledge, and AI-assisted processes to facilitate accurate and scalable testing.</t>

<t>In addition to the proposed framework, this document also defines a six-level classification system (Levels 0 to 5) to characterize the evolution of automation maturity in network protocol testing. These levels serve as a technology roadmap, helping researchers evaluate the current state of their systems and set future goals. Each level captures increasing capabilities in protocol understanding, orchestration, analysis, and human independence.</t>

</section>
<section anchor="definition-and-acronyms"><name>Definitions and Acronyms</name>

<t>DUT: Device Under Test</t>

<t>Tester: A network device for protocol conformance and performance testing. It can generate specific network traffic or emulate particular network devices to facilitate the execution of test cases.</t>

<t>LLM: Large Language Model</t>

<t>FSM: Finite State Machine</t>

<t>API: Application Programming Interface</t>

<t>CLI: Command Line Interface</t>

<t>Test Case: A specification of conditions and inputs to evaluate a protocol behavior.</t>

<t>Test Script: An executable program or sequence that carries out a test case on a device.</t>

</section>
<section anchor="network-protocol-testing-scenarios"><name>Network Protocol Testing Scenarios</name>

<t>Network protocol testing is required in many scenarios. This document outlines two common phases where protocol testing plays a critical role:</t>

<t><list style="numbers" type="1">
  <t>Device Development Phase:
During the development of network equipment, vendors must ensure that their devices conform to protocol specifications. This requires the construction of a large number of test cases. Testing during this phase may involve both protocol testers and the DUT, or it may be performed solely through interconnection among DUTs.</t>
  <t>Procurement Evaluation Phase:
In the context of equipment acquisition by network operators or enterprises, candidate equipment suppliers need to demonstrate compliance with specified requirements. In this phase, third-party organizations typically perform the testing to ensure neutrality. This type of testing is usually conducted as black-box testing, requiring the use of protocol testers interconnected with the DUT. The test cases are executed while observing whether the DUT behaves in accordance with expected protocol specifications.</t>
</list></t>

</section>
<section anchor="key-elements-of-network-protocol-testing"><name>Key Elements of Network Protocol Testing</name>

<t>Network protocol testing is a complex and comprehensive process that typically involves multiple parties and various necessary components. The following entities are generally involved in protocol testing:</t>

<t><list style="numbers" type="1">
<t>DUT:
The DUT can be a physical network device (such as a switch, router, or firewall) or a virtual network device (such as an FRRouting (FRR) software router).</t>
  <t>Protocol Tester:
A protocol tester is a specialized network device that usually implements a standard and comprehensive protocol stack. It can generate test traffic, collect and analyze incoming traffic, and produce test results. Protocol testers can typically be controlled via scripts, allowing automated interaction with the DUT to carry out protocol tests.</t>
<t>Test Cases:
Protocol test cases may cover various categories, including protocol conformance, functional, and performance tests. Each test case typically includes essential elements such as the test topology, step-by-step procedures, and expected results. A well-defined test case also includes detailed configuration parameters.</t>
  <t>DUT Configuration: Before executing a test case, the DUT must be initialized with specific configurations according to the test case requirements (setup). Throughout the test, the DUT configuration may undergo multiple modifications as dictated by the test scenario. Upon test completion, appropriate configurations are usually applied to restore the DUT to its initial state (teardown).</t>
  <t>Tester Configuration and Execution Scripts: In test scenarios involving protocol testers, the tester often plays the active role by generating test traffic and orchestrating the test process. This requires the preparation of both tester-specific configurations and execution scripts. Tester scripts are typically designed in coordination with the DUT configurations to ensure proper interaction during the test.</t>
</list></t>

</section>
<section anchor="automated-network-protocol-test-framework"><name>Automated Network Protocol Test Framework</name>

<t>A typical network protocol test automation framework is as follows.</t>

<figure><artwork><![CDATA[
  +---------+                                        +----------+  
  |   RFC   |   +----------+   +-----------------+   | Test Env.|
  |Documents|-->|          |-->|  Tester Script  |-->| +------+ |
  +---------+   |Test Case |   |   Generation    |   | |Tester| |
                |          |   | --------------- |   | +-^----+ |
  +---------+   |Generation|   |DUT Configuration|   |   |  |   |
  |  Human  |-->|          |-->|   Generation    |-->| +----v-+ |
  | Intent  |   +----------+   +-----------------+   | | DUT  | |
  +---------+         ^                 ^            | +------+ |
                      |                 |            +----------+
                +-----------------------------+           |  Test
                |   Feedback and Refinement   |<----------+ Report  
                +-----------------------------+           
]]></artwork></figure>

<section anchor="protocol-understanding"><name>Protocol Understanding</name>

<t>Protocol understanding forms the foundation for automated test case generation. Since protocol specifications are typically written in natural language, it is necessary to model the core functionalities of the protocol and extract a machine-readable representation. This process involves identifying key behavioral semantics and operational logic from the specification text.</t>

<t>In addition to high-level functional modeling, structured data extraction of protocol details, such as packet field definitions, state machines, and message sequences, is also an essential component of protocol understanding. These structured representations serve as a blueprint for downstream tasks, enabling accurate and comprehensive test case synthesis based on the intended protocol behavior.</t>
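<t>As an illustration, the machine-readable artifact produced at this stage can be as small as a state-transition table. The following sketch encodes an FSM fragment and checks whether an observed transition is permitted; the state and event names are hypothetical and not drawn from any particular RFC:</t>

<figure><artwork><![CDATA[
```python
# Illustrative machine-readable FSM fragment extracted from a
# specification; state and event names are hypothetical.
FSM = {
    ("IDLE", "OPEN_RECEIVED"): "CONNECTING",
    ("CONNECTING", "ACK_RECEIVED"): "ESTABLISHED",
    ("ESTABLISHED", "CLOSE_RECEIVED"): "IDLE",
}

def next_state(state, event):
    """Return the successor state, or None if the transition is undefined."""
    return FSM.get((state, event))

def is_valid_transition(state, event):
    return (state, event) in FSM
```
]]></artwork></figure>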

</section>
<section anchor="test-case-generation"><name>Test Case Generation</name>

<t>Once a machine-readable protocol specification is available, the next step is to identify test points based on human intent and extend them into concrete test cases. Test points are typically derived from constraints described in the protocol specification, such as the correctness of packet processing logic or the validity of protocol state transitions. Each test case elaborates on a specific test point and includes detailed test procedures and expected outcomes. It may also include a representative set of test parameters (e.g., frame lengths) to ensure coverage of edge conditions. Conformance test cases are generally categorized into positive and negative types. Positive test cases verify that the protocol implementation correctly handles valid inputs, while negative test cases examine how the system responds to malformed or unexpected inputs.</t>

<t>The quality of generated test cases is typically evaluated along two primary dimensions: correctness and coverage. Correctness assesses whether a test case accurately reflects the intended semantics of the protocol. Coverage evaluates whether the test suite exercises all protocol definitions and constraints, including performance-related behaviors and robustness requirements. However, as test cases are often represented using a mix of natural language, topology diagrams, and configuration snippets, their inherent ambiguity makes systematic quality evaluation difficult. Effective metrics for test case quality assessment are still lacking, which remains an open research challenge.</t>
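<t>To make the positive/negative distinction concrete, the sketch below expands a single range constraint on a field into boundary-value positive cases and out-of-range negative cases. The constraint and case formats are illustrative assumptions, not a prescribed schema:</t>

<figure><artwork><![CDATA[
```python
# Sketch: expand one range constraint into positive and negative
# test inputs. The dictionary layout is hypothetical.
def generate_cases(field, lo, hi):
    positive = [lo, hi, (lo + hi) // 2]   # valid boundaries + midpoint
    negative = [lo - 1, hi + 1]           # just outside the valid range
    cases = []
    for v in positive:
        cases.append({"field": field, "value": v, "expect": "accept"})
    for v in negative:
        cases.append({"field": field, "value": v, "expect": "reject"})
    return cases
```
]]></artwork></figure>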

</section>
<section anchor="test-script-and-dut-configuration-generation"><name>Test Script and DUT Configuration Generation</name>

<t>Test cases are often translated into executable scripts using available API documentation and runtime environments. This process requires mapping test steps described in natural language to specific function calls and configurations on the test equipment and DUTs.</t>

<t>Since tester scripts and DUT configuration files are typically used together, they must be generated in a coordinated manner rather than in isolation. The generated configurations must ensure mutual interoperability within the test topology and align with the step-by-step actions defined in the test case. This includes setting compatible protocol parameters, interface bindings, and execution triggers to facilitate correct protocol interactions and achieve the intended test objectives.</t>
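<t>One way to enforce this coordination is to render both artifacts from a single shared parameter set, so that addresses, interface bindings, and protocol parameters cannot drift apart. In the sketch below, the CLI syntax and tester API names are invented for illustration:</t>

<figure><artwork><![CDATA[
```python
# Sketch: derive the DUT configuration and the tester script from one
# shared parameter set. CLI syntax and tester calls are hypothetical.
def render_pair(params):
    dut_config = (
        f"interface {params['dut_if']}\n"
        f" ip address {params['dut_ip']}/24\n"
        f" router ospf {params['process_id']}\n"
    )
    tester_script = (
        f"tester.bind_port('{params['tester_if']}')\n"
        f"tester.emulate_ospf(peer='{params['dut_ip']}', "
        f"process={params['process_id']})\n"
    )
    return dut_config, tester_script
```
]]></artwork></figure>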

<t>Before deploying the test scripts and corresponding configurations, it is essential to validate both their syntactic and semantic correctness. Although the protocol testing environment is isolated from production networks and thus inherently more tolerant to failure, invalid scripts or misconfigured devices can still render test executions ineffective or misleading. Therefore, a verification step is necessary to ensure that the generated artifacts conform to the expected syntax of the execution environment and accurately implement the intended test logic as defined by the test case specification.</t>
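<t>A minimal validation step can combine a parse check with a crude semantic check that the generated script actually invokes the calls the test case requires. The sketch below assumes the generated tester scripts are Python; the required-call convention is an assumption for illustration:</t>

<figure><artwork><![CDATA[
```python
import ast

def validate_script(source, required_calls):
    """Reject a generated script that fails to parse or never invokes
    the calls the test case requires (a crude semantic check)."""
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return False, f"syntax error: {exc}"
    called = {n.func.attr for n in ast.walk(tree)
              if isinstance(n, ast.Call) and isinstance(n.func, ast.Attribute)}
    missing = set(required_calls) - called
    if missing:
        return False, f"missing required calls: {sorted(missing)}"
    return True, "ok"
```
]]></artwork></figure>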

</section>
<section anchor="test-case-execution"><name>Test Case Execution</name>

<t>The execution of test cases involves the automated deployment of configurations to the DUT as well as the automated execution of test scripts on the tester. This process is typically carried out in batches and requires a test case management system to coordinate the workflow. Additionally, intermediate configuration updates during the execution phase may be necessary and should be handled accordingly.</t>
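<t>The batch workflow can be sketched as a loop that deploys the setup configuration, runs the script, and always restores the DUT to its initial state, even on failure. The callback names below are placeholders for a real test case management system:</t>

<figure><artwork><![CDATA[
```python
def run_batch(test_cases, deploy, execute, teardown):
    """Run test cases in sequence; always restore the DUT between cases."""
    results = {}
    for case in test_cases:
        try:
            deploy(case)                 # push DUT configuration (setup)
            results[case["name"]] = execute(case)
        except Exception as exc:
            results[case["name"]] = f"error: {exc}"
        finally:
            teardown(case)               # restore initial DUT state
    return results
```
]]></artwork></figure>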

</section>
<section anchor="report-analysis-feedback-and-refinement"><name>Report Analysis, Feedback and Refinement</name>

<t>Test reports represent the most critical output of a network protocol testing workflow. They typically indicate whether each test case has passed or failed and, in the event of failure, include detailed error information specifying which expected behaviors were not satisfied. These reports serve as an essential reference for device improvement, standard compliance assessment, or procurement decision-making.</t>

<t>However, because test case descriptions, generated scripts, or device configurations may themselves be inaccurate, a test failure does not always indicate a protocol implementation defect. Failed test cases therefore require further inspection using execution logs, diagnostic outputs, and relevant runtime context. This motivates the integration of a feedback and refinement mechanism into the framework.
The feedback loop analyzes runtime behaviors to detect discrepancies that are difficult to identify through static inspection alone. This iterative refinement is necessary to improve the reliability of the automated testing system.</t>
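<t>A first-pass triage step can separate probable environment or tooling problems from failures that may indicate a genuine implementation defect. The keyword lists in this sketch are illustrative only; a real system would use richer diagnostics:</t>

<figure><artwork><![CDATA[
```python
# Heuristic triage of failed cases: separate probable environment or
# tooling problems from possible protocol violations. Keywords are
# illustrative, not exhaustive.
ENV_SIGNS = ("connection refused", "timeout contacting tester",
             "authentication failed")

def triage(failure_log):
    log = failure_log.lower()
    if any(sign in log for sign in ENV_SIGNS):
        return "environment"        # regenerate config or retry
    return "possible-violation"     # escalate for manual inspection
```
]]></artwork></figure>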

</section>
</section>
<section anchor="automation-maturity-levels-in-network-protocol-testing"><name>Automation Maturity Levels in Network Protocol Testing</name>

<t>To describe the varying degrees of automation adopted in protocol testing practices, we define a six-level maturity model. These levels reflect technical progress from fully manual testing to self-optimizing, autonomous systems. The classification is intended as a reference model, not as a fixed pipeline structure.</t>

<texttable title="Automation Maturity Matrix for Network Protocol Testing" anchor="auto-level">
      <ttcol align='left'>Level</ttcol>
      <ttcol align='left'>RFC Interpretation</ttcol>
      <ttcol align='left'>Test Asset Generation &amp; Execution</ttcol>
      <ttcol align='left'>Result Analysis &amp; Feedback</ttcol>
      <ttcol align='left'>Human Involvement</ttcol>
      <c>L0</c>
      <c>Manual reading</c>
      <c>Fully manual scripting and CLI-based execution</c>
      <c>Manual observation and logging</c>
      <c>Full-time intervention</c>
      <c>L1</c>
      <c>Human-guided parsing tools</c>
      <c>Script templates with tool-assisted execution</c>
      <c>Manual review with basic tools</c>
      <c>High (per test run)</c>
      <c>L2</c>
      <c>Template-based extraction</c>
      <c>Basic auto-generation of configs &amp; scripts for standard cases</c>
      <c>Rule-based validation with human triage</c>
      <c>Moderate (Manual correction and tuning)</c>
      <c>L3</c>
      <c>Rule-based semantic parsing</c>
      <c>Parameterized generation and batch orchestration</c>
      <c>ML-assisted anomaly detection</c>
      <c>Supervisory confirmation</c>
      <c>L4</c>
      <c>Structured model interpretation</c>
      <c>Objective-driven synthesis with end-to-end automation</c>
      <c>Correlated failure analysis and report generation</c>
      <c>Minimal (strategic input)</c>
      <c>L5</c>
      <c>Adaptive protocol modeling</c>
      <c>Self-adaptive generation and self-optimizing execution</c>
      <c>Predictive diagnostics and remediation proposals</c>
      <c>None (optional audit)</c>
</texttable>

<t>As shown in <xref target="auto-level"/>, the automation progression can be characterized along four dimensions: specification interpretation, test orchestration, result analysis, and human oversight. Each level reflects an increasing degree of system autonomy and decreasing human involvement.</t>

<section anchor="l0-manual-testing"><name>L0: Manual Testing</name>

<t>Description: All testing tasks are performed manually by test engineers.</t>

<t>Key Characteristics:</t>

<t><list style="symbols">
  <t>Protocol understanding, test case design, topology setup, scripting, execution, and result analysis all rely on manual work.</t>
  <t>Tools are only used for basic assistance (e.g., packet capture via Wireshark).</t>
</list></t>

</section>
<section anchor="l1-tool-assisted-testing"><name>L1: Tool-Assisted Testing</name>

<t>Description: Tools are used to assist in some testing steps, but the core logic is still human-driven.</t>

<t>Key Characteristics:</t>

<t><list style="symbols">
  <t>Automation includes test script execution and automated result comparison.</t>
  <t>Manual effort is still required for test case design, topology setup, and exception analysis.</t>
</list></t>

</section>
<section anchor="l2-partial-automation"><name>L2: Partial Automation</name>

<t>Description: Basic test case generation and execution are automated, but critical decisions still require human input.</t>

<t>Key Characteristics:</t>

<t><list style="symbols">
  <t>Automation includes:
  <list style="symbols">
      <t>A framework that generates basic test cases (e.g., covering mandatory fields and FSMs defined in RFCs) and corresponding tester scripts and DUT configurations.</t>
      <t>Topology generation for a single test case.</t>
    </list></t>
  <t>Manual effort includes:  <list style="symbols">
      <t>Designing complex or edge case scenarios.</t>
      <t>Root cause analysis when tests fail.</t>
    </list></t>
</list></t>

</section>
<section anchor="l3-conditional-automation"><name>L3: Conditional Automation</name>

<t>Description: The system can autonomously complete the test loop, but relies on human-defined rules and constraints.</t>

<t>Key Characteristics:</t>

<t><list style="symbols">
  <t>Automation includes:  <list style="symbols">
      <t>Complex test case generation based on semantic understanding of RFCs (e.g., modeling core functions described in the specification).</t>
      <t>Minimal common topology synthesis for a set of test cases.</t>
      <t>Automated result analysis with anomaly detection.</t>
    </list></t>
  <t>Manual effort includes:  <list style="symbols">
      <t>Reviewing the test plan and confirming whether flagged anomalies represent real protocol violations.</t>
    </list></t>
</list></t>

</section>
<section anchor="l4-high-automation"><name>L4: High Automation</name>

<t>Description: Full automation of the testing pipeline, with minimal human involvement limited to high-level adjustments.</t>

<t>Key Characteristics:</t>

<t><list style="symbols">
  <t>Automation includes:  <list style="symbols">
      <t>End-to-end automation from RFC parsing to test report generation.</t>
      <t>Automated result analysis with root cause analysis.</t>
      <t>Automated recovery from environment issues.</t>
    </list></t>
  <t>Manual effort includes:  <list style="symbols">
      <t>Defining high-level test objectives, with the system decomposing tasks accordingly.</t>
    </list></t>
</list></t>

</section>
<section anchor="l5-full-automation"><name>L5: Full Automation</name>

<t>Description: Adaptive testing, where the system independently determines testing strategies and continuously optimizes coverage.</t>

<t>Key Characteristics:</t>

<t><list style="symbols">
  <t>Automation includes:  <list style="symbols">
      <t>Learning protocol implementation specifics (e.g., proprietary extensions) and generating targeted test cases.</t>
      <t>Leveraging historical data to predict potential defects.</t>
      <t>Iterative self-optimization to improve efficiency.</t>
    </list></t>
  <t>Manual effort: None. The system autonomously outputs a final compliance report along with remediation suggestions.</t>
</list></t>

</section>
</section>
<section anchor="an-example-of-llm-based-automated-network-protocol-test-framework-from-l2-to-l3"><name>An Example of LLM-based Automated Network Protocol Test Framework (From L2 to L3)</name>

<t>The emergence of LLMs has significantly advanced the degree of automation achievable in network protocol testing. Within the proposed framework, LLMs serve as central agents in multiple stages of the pipeline, enabling a transition from Level 2 (Partial Automation) to Level 3 (Conditional Automation). By leveraging LLMs with domain-specific prompting strategies, the system is able to extract, synthesize, and refine testing artifacts with minimal human intervention.</t>

<t>At the protocol understanding stage, LLMs can ingest RFC documents and identify structured components such as protocol field definitions, message formats, and finite state machines. These outputs, often difficult to parse using traditional rule-based methods due to inconsistencies in document structure, are extracted by prompting the LLM with carefully designed templates. The resulting structured data serves as the semantic backbone for subsequent test case generation.</t>
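<t>A sketch of such a template-driven extraction step is shown below. The prompt wording is an assumption, and the JSON schema (fields, states, transitions) is one possible choice rather than a fixed interface; the point is that the model's reply is validated before it feeds downstream stages:</t>

<figure><artwork><![CDATA[
```python
import json

# Hypothetical extraction prompt template; wording and schema are
# illustrative, not a fixed interface.
EXTRACT_TEMPLATE = """You are given an excerpt of a protocol specification.
Return only JSON with keys: "fields", "states", "transitions".
Excerpt:
{excerpt}
"""

def build_extraction_prompt(excerpt):
    return EXTRACT_TEMPLATE.format(excerpt=excerpt)

def parse_llm_reply(reply):
    """Check that the model reply matches the expected structure before
    it is passed to downstream test case generation."""
    data = json.loads(reply)
    missing = {"fields", "states", "transitions"} - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data
```
]]></artwork></figure>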

<t>Based on the extracted protocol semantics, LLMs are also capable of generating test scenarios that aim to exercise key protocol behaviors. These scenarios are proposed in natural language and then instantiated into formal test cases through alignment with predefined schemas. The use of LLMs allows for dynamic adaptation, enabling the test generation process to generalize across different protocol families while maintaining context awareness. Although the coverage achieved is not exhaustive, the LLM is often effective at identifying common corner cases and expected failure modes, especially when guided by selected examples or constraint-based instructions.</t>

<t>For the script generation phase, LLMs assist in mapping abstract test cases to executable configurations and scripts for both the DUT and the tester. Given appropriate configuration/API documentation and environment context, the model synthesizes CLI commands, configuration snippets, and tester control scripts (e.g., Python-based test harnesses or Tcl scripts for commercial testbeds). This process is enhanced by retrieval-augmented generation, where relevant code examples or past case patterns are integrated into the model's prompt to increase the reliability and correctness of the output.</t>
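<t>The retrieval step can be as simple as ranking stored snippets by lexical overlap with the test step before splicing them into the prompt; a production system would typically use embedding-based retrieval instead. All names in this sketch are placeholders:</t>

<figure><artwork><![CDATA[
```python
def retrieve_examples(query, corpus, k=2):
    """Rank stored snippets by word overlap with the test step
    (a stand-in for a real embedding-based retriever)."""
    qwords = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda s: -len(qwords & set(s.lower().split())))
    return scored[:k]

def build_prompt(step, corpus):
    examples = "\n---\n".join(retrieve_examples(step, corpus))
    return f"Relevant examples:\n{examples}\n\nGenerate a script for: {step}"
```
]]></artwork></figure>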

<t>Finally, during the feedback and refinement loop, LLMs analyze execution logs and output discrepancies to identify failure points. While current models may not fully replace human diagnosis, they can summarize error types, hypothesize root causes, and suggest concrete revisions to test cases or script logic. These suggestions are then validated by experts before being applied, ensuring both efficiency and correctness.</t>

<t>Despite these capabilities, it is important to note that LLMs are fundamentally probabilistic and lack strong guarantees of determinism or correctness. Therefore, while the LLM-enabled system can execute a full test pipeline with reduced human effort, human oversight remains essential, particularly for verifying generated artifacts and handling ambiguous or novel scenarios. Nonetheless, the integration of LLMs marks a clear advance beyond traditional script-driven automation tools, and provides a practical path for elevating protocol testing practices toward Level 3 maturity.</t>

</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t><list style="numbers" type="1">
  <t>Execution of Unverified Generated Code: Automatically generated test scripts or configurations (e.g., CLI commands, tester control scripts) may include incorrect or harmful instructions that misconfigure devices or disrupt test environments. Mitigation: All generated artifacts should undergo validation, including syntax checking, semantic verification against protocol constraints, and dry-run execution in sandboxed environments.</t>
  <t>AI-Assisted Component Risks: LLMs may produce incorrect or insecure outputs due to their probabilistic nature or prompt manipulation. Mitigation: Apply input sanitization, prompt hardening, and human-in-the-loop validation for critical operations.</t>
</list></t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>This document has no IANA actions.</t>

</section>


  </middle>

  <back>








<section numbered="false" anchor="acknowledgments"><name>Acknowledgments</name>

<t>This work is supported by the National Key R&amp;D Program of China.</t>

</section>
<section numbered="false" anchor="contributors"><name>Contributors</name>

<t>Zhen Li<br />
Beijing Xinertel Technology Co., Ltd.<br />
Email: lizhen_fz@xinertel.com</t>

<t>Zhanyou Li<br />
Beijing Xinertel Technology Co., Ltd.<br />
Email: lizy@xinertel.com</t>

</section>


  </back>

TLcomTeD61h3yGOPuA4VsoKhZVxR4qiJeS5WL7xZNc3OV1C7sIooXdJHwW5i
bAHkdzv2ZTursBNvBVg9TlFY04h14ib0YQAWgMqpBvjE5I08kImm7AZkKz2a
MrcyTjHKwTCxdfNJWuXgCl77TkDrJoTofaIV4X0TcK6lJlrRxwKscJo/TKoe
edHs+tN9ALGfGqGDM+c1anoMLYqSwDtoRrRgUjsRxaBJKxcpddy6n/ShdK5a
zbCoclt+lLAlady3TkSFtActl2UX3e5Bh7ksb6p6xTur8oHpsXLHrGOSuyPp
H5XK2aMUY260i8f1oTJgGW6x4lcdUydJ5eA3SY3QP/lWysXBNeCZ4BvsCS1M
3Kj/tiy53Jk8Wqr78VN58E/e/MWPfuKRRxLnc13za+Vbq+jsIP3/KmWtWUY7
KHF9e2PtwNHihNG0lScGfTBFax1dx5yJrouHp3NSKsuqfpdQcibnBQoGDJ3K
FI8yPvpQ1h9H9C3xuBubieOi0j3ele5en8bSmZ2TEWXCEjbj2c4jWZiji8C7
7PYLLys6Y9h4yGnr/W9kUIo5PEdEaZAU7z9WekbAnJtYdAjWUPkh/UkAX2vV
3Cpb7tes2bE3VLLJz2xLFjp4OvdDDUJdhHV/aZIbJwhBhxH1MXvjwzHJ3sXa
iAwoCG/c0stl3EZC59DoXJK2vQu0AI+GHZu5ukaamkAFDyxkXV/ZQzHXrrWS
cqylj9n3PlSbFcwJJweZrP2rLmZ9M2O6ObGFj5o0s9DIPLJvRDaXJTAv2S12
VdYltpI90869tXgPOMwLXfPXuuYrf9woWFlf9OCGaPLCgaQDWh7Yw5HevAEN
So2WosP2S1UQLNhW+tpz0YLv4M2yZ83OSi/5APBsnP/xMnsqB0HVusv51D8+
OeWI8ENbPohYnj2N+l85QwPsfC/5iB9/jEP/97/T1AXaEsU3aPKltiPdoVPe
50HluEKaAD2oSYwEwU4nHHSWa/vOyQbzRo4vrDf9qKs9ZEUlsRIa29WfUl0t
RjE/pTEDsKV/0Bc6gnm3/Nrt55fexATv/SICyUvE9IljZIFI8Evsd1UDy+4v
K7HIuRSnHUVsXrwOFBS54PnGyKezxzv8KYyQ5bHj+sGQT6MMelA4IqmkeOXM
UzzrpEgO878X2ykZwtrnvChHalnVPghwt/KDFV7saIG0tv2NASO29uHCU/J3
lzJuPCN9mqBxbku12XyUz45H+AMSY2ZQPwAQ6rAaxfMDApLAEK6aefkEuRPt
CVmu9MxM1OY8GqPQZKaJMIwnWYGZFxc7OBbWEjrUx1nhc4zUTNnS7Wxe5Zqn
5ReXtO0S8MTFHxBSPdap0vhBGo60DptSgob41odfB7sI+gIb+ssIy6PeGe4c
nqzy8Vjn3XeMi0zIpGwix/boY3t6H/uaA7fz6t3rUQqT5+EuTqT3fk7il2SW
Vb73TEloJ50HGY1GlSZJJydYH7Y84WAvhNM+c8pWY3agS01O0kvhwII+/rZp
ODQbsoPO3m+cRrSduD4vDV/yKEgdjjeelYj3sTS2tONvitmrve/hczHpxSBO
pYGBkVY/TaGMzO1QuaOS0C8VB9nstVEk4XpC8lCODghn3CEC8052R0FJOjKi
C5eARsTCKOxBgR0CiRoYcIjxOim9+tM9IsOHhiDyifjlCD2dsg5jMrwVFDvu
GKzyOtY42m3aQ7+q8vU64LTSpakqOLakhIewuwqiLTLz1aXi4rPCQiyfOn+L
iEPcaYHZVDe7NVoe+dGsAibq1ZAnjSV58e+h67VC9GsE5uVJcOhZnIQVvkX7
ABP+PBa2xzp44kUxTHudfJzE7wY9wnyW62oXWF4lBonkOahlTJPSjeov7DL7
XroEdozSncLjr42LZ3kcAG84DaJHoJKJ4nm+3mSZMmgOUt2wIupoCXB5UKNi
iFiOLlnN+9cw+9blbT3q1j1Ir3mAGUxAerw4fEPG/EHatsuDUOMS/9xPGT4E
sCnZn6zekO1OcvpKIH2SdtQEn/QR8PWbkINKI4Pcd1j5rFM8vH8sJZcSBcxT
oz0y2Jbxk0xJbX1Qlk41aVc0roKcBBrdAKPRBWOgiaw6e6nfBKKe396+tpDy
Z7fxZs9eUfwRTWN7t19eTKwOwuPnkt7RYTtJaYsfZDwgMmVfWCjsrJwH7Gn+
S8p9Unj+5Dnbv8US6KlTwjJ/SFvzlDwLzxBK+7RD6ISHY1knTXzB0sXOr6Rh
SPVe81FfZM+OYZm04+j9L7Nnpx31xTz7Zi+hjAmdrFVYp4f1Yw84drbdHWje
dKSxkImFfpjAshjT4NI+Oh8MSJ7Q63Cstp005jGVA4G5OmjiGbtiIZ2Reikv
U9bEJvt+ATt26pO6SVPep76WcqJr0HcIamXDgsSVHqsddxT6rGfIktsXFNIs
M12Gs0R7+rGINiZY/PcirPDAo0H++xpWbYif0vC7mtpxOGGElhUjB0lIkEqp
bp8wS3v0QxpMzYD6KGP9qAXTvtxhdcEAk5jAXDCXILmpYaG9lP3pftnJ5Ju0
QzEuOfbKxU9QCIMlcGDXmpwfV9txeCgiHrrQ5H65VcHUTiPpYz1qgQz8ii/n
baLTJ5p0/dlTShyFsS9jP4vIx+gElC8kSFuE8Es4QLPuv7ax3GCzRnc7FKl7
lgMHWt3a1/mW8TC9qCUugokIAC7BseEgYeP76/h9gXzZNl2XfA4tyjyG1++C
SOcb7UCflxZC6JHX/B6UOdFdEPr+rFWikMpHw1h2AzxD1zQN4ld2phCxrs+v
BybtxgaRgTDYxWINRGk3o8/EEW2zR9YO8rGzmkyxbO+C4W2lb9g36KTuFsMH
07QyHlmmk3plHZwWj6ck1SO1ypqQJ/DNRf4rUSPejxqcxnHf8zSX65s7GB8+
9/XzbyVTefaQ0/PTjVEpKjTOTa2YzNRotM4dc+9CbbzGM8lnetZE2jWStSOF
IaI1BPRmD2mojZyyfUCuWjsMsbn317ejxDWnpEqanixc0V0cdwvYd06EkS2b
3NgJN8uHtX2HJv3oiCLJUIpcNvyETMLzXe5t0C7v+Q0ZVXJfjvTKG6j0284M
p5leZu6O63Yh4I9dwXxEDT8lqbQWhKS/4FzBUwNgFS07DTquwGqzvXYAHBQy
k6qlVw3thwZKEWX2n/Gwzwmxp4HqqQ7Af0JRHbBljcvO2s2kQWeAiMjnSbS+
L+2302yz3zUmSknwYgJjwC+2bLNo0oUek6gijc+PaD4tWOMIHLUpjprte6FE
JmgO2Buw0L6rhQtfHmJiSbp9eEU0K2LfQ6bNJUrZlZqM6Nzo8yS+IQsQGhjX
mpxAOesiCn5pxaMiooVynL5tFjJE5zu02HBJN0qIDA/CdikrvPoYh5Vu0Yyk
gyup36tNNhM6E7svXUkhvWJH4AnQB0sSx3KmoXIeIvaJbf8xoYM0d+gFDU0W
B98ao/pqizVpe6qFSpLn7IoRdkh/Kwu1eK9uCEuTb2Uw5sCeoKMGKg/aA4S+
kLwPcmS+QmDm8Tu4vW9olhLspFLk6zsJopfiXDhKfVcW0lxktWwmLnKQR75U
SvPRH53UHFW+Mdo9a24eY/uKtxXp34ENUhC5Jlor/Icv5QD9y7Rv6odaO9ZA
u28DFa+hn5cBqWvbzkFreNJActC6ZbZ4bNNP2+0L+76Gdu8QWmoTJAaF4d7y
27apU1RpTxv4Qv8ekUnZtcOu9wWHtBv3NXizzmPt4pS8WC+VPyEcC5dpu7f1
3wEnWedyQJ2jvr98TeFNQM2oe1yqMO1+1g51YluZ6cedRcMOgNHy5WMC6XdW
r8Ppn7dl96G79AK6D0f0R6TEUigNIRJIeojK9sBKCMR01o9Ev4PtlbvBN+yO
CAkTt9dsOFeOOx+NXvYqWAh3oJ0SvpI1Q1SHeWfSLJMUh8Udh/6y8KFW+drF
zdV3V0dyPP6+DQPsutEn84CgZrOZRAMS7C/9l7qEqJMfL/VTMK7445MV8Lxj
WVAG9Sdt+RmUpk2Ogn/nj5cxmfP2Ny/8Z4qoRtcIuXI9BXxNGS8XA7+3cnqa
/6EPuS3/+c/JN678N+Xq7zCPmIoJhvB1rOsGWnTbF3M891K/HQrojFf/d/Xx
zw/2Aj9MzAHzmp8Z/pVj7g+Gsz//B5mlv+3TWwAA

-->

</rfc>

