
Reliability of Autonomous IoT Systems With Intrusion Detection Attack
Authors: Ding-Chau Wang, Ing-Ray Chen, and Hamid Al-Hamadi
Presented by: Abdulaziz Alhamadani and Mohammad Raihanul Islam

Outline: Introduction (Motivation, Basic idea); System Model; Analyzing the Attack-Defense Game (Case 1: life quota = 1; Case 2: life quota = 2…n); Results; Conclusion

Introduction: Motivation. In recent years, autonomous IoT applications have been on the rise: cyber-physical systems, air-quality measurement, parking-space finding. Information from the collaborating devices must be trustworthy. A centralized intrusion detection system (IDS) is not practical because of the sheer number of devices, so a distributed IDS is a feasible way to approach this problem.

Introduction (contd.): Basic Idea. A lightweight IDS built on an attack-defense game grounded in reverse game theory. Every node participates to obtain incentives and to promote the interests of the designer. The overall idea is that a target node is periodically evaluated by a group of neighbor nodes, via voting, to determine whether it is malicious. Each node uses basic host-level IDS functions to cast a yes or no vote on the target node; yes (no) means the node is judged good (bad). The host-level IDS is characterized by its false positive and false negative rates.

Introduction (contd.). A malicious node can intentionally vote no for a good node and vice versa. To detect this behavior, the defense system can perform an audit after IDS voting to see who cast a vote different from the audit outcome. As a result, every malicious node has to decide whether to attack or not attack in the IDS voting system. Contributions of the paper: it determines the conditions under which malicious nodes have no incentive to attack during IDS voting; an analytical model is developed based on SPN (stochastic Petri net) techniques to analyze the effects of attacking, auditing, and the penalty of being identified as malicious; and the model is applied to an autonomous mobile CPS wherein each node is given a "life quota" to remain in the system.

System Model. The following failure conditions can cause an autonomous IoT system to fail. Byzantine failure: occurs when one-third or more of the nodes are compromised; it is impossible to reach a consensus once one-third are compromised. Energy depletion failure: occurs when energy is too depleted to accomplish the mission; this matters for collaborative systems that must complete the mission within the deadline, before energy runs out. A minimal check of these two conditions is sketched below.
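As a rough illustration (not from the paper), the two failure conditions above could be checked as in this sketch; the function and parameter names are assumptions.

```python
def system_failed(n_bad, n_total, energy_remaining, energy_needed):
    """Minimal sketch of the two failure conditions described above (assumed names).
    Byzantine failure: one-third or more of the nodes are compromised.
    Energy depletion failure: remaining energy cannot cover the rest of the mission."""
    byzantine_failure = n_bad >= n_total / 3.0
    energy_failure = energy_remaining < energy_needed
    return byzantine_failure or energy_failure

# Example: 40 of 128 nodes compromised -> Byzantine failure
print(system_failed(n_bad=40, n_total=128, energy_remaining=10.0, energy_needed=5.0))  # True
```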

System Model: IDS Attack-Defense Game. Attacker behavior comes in two forms. 1 - The first derives from "capture" attacks that compromise nodes (turning a good node into a bad one). A per-node capture rate λ is assumed. This can happen to sensor/actuator devices when they are physically captured by an attacker and converted into malicious nodes; a virus can also turn a good node into a malicious node.

System Model: IDS Attack-Defense Game. 2 - The second form of attacker behavior derives from insider attacks during IDS voting. An insider may attack only probabilistically to evade detection: a bad node decides to attack (with probability P_a) or not attack (with probability 1 - P_a) during IDS voting. The IDS game design discourages malicious nodes from performing attacks; choosing P_a = 0 obtains the maximum lifetime.

System Model: IDS Attack-Defense Game. Defense behavior comes in two forms. 1 - Host level: a node i monitors the positive and negative experiences it has had with a neighbor node j to judge whether j complies with the prescribed execution protocols; anomaly detection, including discrepancy of voting, is used. Node i uses a Beta(a, b) distribution to model the compliance degree of node j, whose range is (0, 1), with a counting positive and b counting negative experiences. The estimated mean compliance is a/(a+b). If node j's compliance is below the minimum compliance degree C_T, node j is judged bad; otherwise it is judged good. The choice of C_T determines the host-level false negative probability H_pfn and the host-level false positive probability H_pfp. A minimal sketch of this host-level rule appears below.
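To make the host-level rule concrete, here is a minimal sketch; the names of the experience counts and threshold are assumptions, and this is not the authors' implementation.

```python
def host_level_vote(a, b, c_t):
    """Cast a host-level vote on neighbor node j (assumed names).
    a: count of positive experiences node i has had with node j.
    b: count of negative experiences.
    c_t: minimum compliance degree C_T chosen by the system designer.
    Returns 'yes' (node j looks good) or 'no' (node j looks bad), based on the
    estimated mean compliance a/(a+b) of a Beta(a, b) belief."""
    mean_compliance = a / (a + b) if (a + b) > 0 else 0.5  # no evidence yet -> neutral estimate
    return "yes" if mean_compliance >= c_t else "no"

# Example: 18 positive and 2 negative experiences, threshold C_T = 0.8
print(host_level_vote(18, 2, 0.8))  # -> 'yes'
```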

System Model: IDS Attack-Defense Game. 2 - System level: the detection strength is controlled by (a) the number of voters m and (b) how often intrusion detection is performed (every T_IDS interval). How it works: in each IDS voting cycle, m nodes neighboring the target node participate and vote for or against it (based on their host-level outcomes). If the majority votes "no", the node is judged bad and evicted; otherwise it is retained. To preserve energy, the defense system audits the outcome with probability P_c or skips the audit with probability 1 - P_c. To punish misbehavior, it penalizes nodes that cast a vote different from the auditing outcome; the system designer sets the severity of the penalty during setup. A sketch of one voting cycle follows.
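The following sketch, with assumed function and variable names, illustrates one system-level voting cycle with probabilistic auditing; it is an illustration of the mechanism described above, not the paper's code.

```python
import random

def ids_voting_cycle(votes, target_is_bad, p_c):
    """One system-level IDS voting cycle (assumed names).
    votes: list of 'yes'/'no' votes cast by the m neighbor voters.
    target_is_bad: ground truth, revealed only if the defense system audits.
    p_c: probability that the defense system audits the voting outcome.
    Returns (evict_target, penalized_voters)."""
    no_votes = votes.count("no")
    evict_target = no_votes > len(votes) / 2          # majority 'no' -> evict the target

    penalized_voters = []
    if random.random() < p_c:                         # audit with probability P_c
        true_vote = "no" if target_is_bad else "yes"  # the audit reveals the true outcome
        penalized_voters = [i for i, v in enumerate(votes) if v != true_vote]
    return evict_target, penalized_voters

# Example: 5 voters, a good target, two malicious 'no' votes, audit probability 0.5
print(ids_voting_cycle(["yes", "yes", "no", "no", "yes"], target_is_bad=False, p_c=0.5))
```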

IDS Attack-Defense Game. The game is based on mechanism design theory (reverse game theory). It models the decision making between the attacker and the defense system and supports its analysis. The game captures the relationship between the defense system and a malicious node i: node i has two options during IDS voting, attack or not attack; the defense system audits the voting result with probability P_c or does not audit with probability 1 - P_c.

IDS Attack-Defense Game. Before explaining the payoff table, note the following. A malicious node can attack with probability P_a or not attack with probability 1 - P_a. If it attacks, it casts a "no" vote against a good target and a "yes" vote for a bad target, trying to evict a good node or retain a bad one. If it decides not to attack, it behaves like a good node. The defense system decides to audit with probability P_c or not with probability 1 - P_c. Auditing has a cost C: the system must collect relevant information from all nodes that have had experiences with the target node. If the target node is immobile, this cost may be small; for a highly mobile node, a high cost is unavoidable to ensure that the auditing outcome reflects the true outcome of whether the target node is malicious.

IDS Attack-Defense Game Payoff: 4 cases, written as (payoff to defense system, payoff to malicious node).
Audit with prob P_c vs. Attack with prob P_a — Case 1: (L_c^a - C, -L_c^a)
Audit with prob P_c vs. Not attack with prob 1-P_a — Case 2: (L_c^na - C, -L_c^na)
No audit with prob 1-P_c vs. Attack with prob P_a — Case 3: (-G_nc^a, G_nc^a)
No audit with prob 1-P_c vs. Not attack with prob 1-P_a — Case 4: (-G_nc^na, G_nc^na)

IDS Attack-Defense Game Payoff: Case 1. The defense system decides to audit the voting outcome (probability P_c) and malicious node i decides to attack (probability P_a). The defense system detects the true outcome (good or bad target node) by collecting reports from all relevant nodes and then punishes the ones that cast a different vote with a penalty L_c^a >= 0, where c means the system "checks" the voting outcome and a means malicious node i "attacks" during IDS voting. The loss to the malicious node is a gain to the defense system. Payoff to the defense system = L_c^a - C; payoff to malicious node i = -L_c^a.

IDS Attack-Defense Game Payoff: Case 2. The defense system decides to audit the voting outcome (probability P_c) and malicious node i decides NOT to attack (probability 1 - P_a). The defense system detects the true outcome by collecting reports from all relevant nodes and punishes the ones that cast a different vote. The malicious node does not receive a penalty here because it acts like a good node. The defense system still gains something positive in reliability, L_c^na >= 0, where c means the system "checks" the voting outcome and na means malicious node i does "not attack" during IDS voting. The gain to the defense system is a loss to the malicious node. Payoff to the defense system = L_c^na - C; payoff to malicious node i = -L_c^na.

IDS Attack-Defense Game Payoff: Case 3. The defense system decides NOT to audit the voting outcome (probability 1 - P_c) and malicious node i decides to attack (probability P_a). The voting outcome is accepted as is, which may adversely impact the system reliability; the impact is denoted G_nc^a >= 0, where nc means the system does "not check" the voting outcome and a means malicious node i "attacks" during IDS voting. The gain to the malicious node is a loss to the defense system. Payoff to the defense system = -G_nc^a; payoff to malicious node i = G_nc^a.

IDS Attack-Defense Game Payoff: Case 4. The defense system decides NOT to audit the voting outcome (probability 1 - P_c) and malicious node i decides NOT to attack (probability 1 - P_a). Because of no auditing, the voting outcome is accepted as is, which may adversely impact the system reliability; the impact is denoted G_nc^na >= 0, where nc means the system does "not check" the voting outcome and na means malicious node i does "not attack" during IDS voting. The gain to the malicious node is a loss to the defense system. Payoff to the defense system = -G_nc^na; payoff to malicious node i = G_nc^na.

Theorem & Proof. The condition P_c (L_c^a - L_c^na) >= (1 - P_c)(G_nc^a - G_nc^na) must be satisfied to discourage malicious node i from performing attacks during IDS voting. From Table 1: the expected payoff to a malicious node that does not attack during IDS voting is Payoff_na = -P_c L_c^na + (1 - P_c) G_nc^na, and the expected payoff to a malicious node that attacks during IDS voting is Payoff_a = -P_c L_c^a + (1 - P_c) G_nc^a. To guarantee that malicious node i has no incentive to attack, we require Payoff_na >= Payoff_a. Rearranging this inequality gives the general rule above for the design of the loss and gain payoff functions.

Theorem & Proof. Solving the inequality for P_c gives P_c >= (G_nc^a - G_nc^na) / ((L_c^a - L_c^na) + (G_nc^a - G_nc^na)): the defense system's auditing probability must be at least the right-hand side value to discourage malicious nodes from performing attacks during IDS voting. The right-hand side depends on the L and G payoff functions, which can be publicized to all nodes so that a malicious node has no incentive to perform attacks during IDS voting. A small numeric check of this rule appears below.
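As a quick sanity check (a sketch, not from the paper), plugging in the payoff values listed on the next slide (G_nc^a = 1, L_c^a = β, L_c^na = 0, G_nc^na = 0) reproduces the minimum auditing probability P_c^min = 1/(1+β) used later in the results.

```python
def p_c_min(l_c_a, l_c_na, g_nc_a, g_nc_na):
    """Minimum auditing probability from the no-incentive condition
    P_c (L_c^a - L_c^na) >= (1 - P_c)(G_nc^a - G_nc^na)."""
    return (g_nc_a - g_nc_na) / ((l_c_a - l_c_na) + (g_nc_a - g_nc_na))

for beta in (1.0, 0.5):
    # Payoff values from the modeling slides: G_nc^a = 1, L_c^a = beta,
    # L_c^na = 0, G_nc^na = 0, so P_c^min = 1 / (1 + beta).
    print(beta, p_c_min(l_c_a=beta, l_c_na=0.0, g_nc_a=1.0, g_nc_na=0.0))
# beta = 1.0 -> 0.5
# beta = 0.5 -> 0.666...
```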

Modeling and Analysis. Recall that G_nc^a is the payoff to a malicious node that performs an attack without being detected, since the system did not perform auditing; this is the highest impact achievable by a malicious node, and the payoff is 1 (the gain of one life quota). L_c^a is applied when a node is detected casting a vote different from the auditing outcome; the payoff is the quota decay parameter β. L_c^na is applied when a malicious node does not attack while the system performs auditing; the payoff is zero, as there is no reduction in life quota. G_nc^na is the payoff when there is no attack and no audit; its value is set to 0, as there is no reduction in life quota.

Analyzing the Attack-Defense Game. All good nodes are initially placed in N_g as tokens. Good nodes are compromised at rate λ; this transition moves tokens from N_g to N_b, the place where compromised but undetected nodes are kept. Good nodes can also be misidentified as bad when auditing is not performed: tokens move from N_g to N_e at rate N_g × P_IDS_fp / T_IDS, where P_IDS_fp is the system-level false-positive probability and T_IDS is the IDS interval.

Analyzing the Attack-Defense Game (contd.). If the voting outcome for a bad node is "no", it is evicted from the system; this happens with probability 1 - P_IDS_fn, where P_IDS_fn is the system-level false-negative probability, so the transition rate is N_b × (1 - P_IDS_fn) / T_IDS. On the other hand, if the system misidentifies a bad node as good, it remains in the system; the corresponding transition rate is N_b × P_IDS_fn / T_IDS.

Analyzing the Attack-Defense Game (contd.). If a node attacks and the system audits (probability P_a P_c), the audit identifies the bad node and a reduction of life quota is applied; with β = 1 the node is evicted and moved to N_e. With probability 1 - P_a P_c, a false-negative bad node can remain in N_b. In summary, if a bad node attacks and the system audits, the node flows from N_b to N_e; it remains in N_b under all other conditions. A small rate-based sketch of these flows follows.
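The sketch below is a rough, discretized illustration of the token flows described on these slides for the β = 1 case. The step size, function names, the use of expected-value (fluid) updates instead of a true stochastic Petri net, and the treatment of the two eviction paths as independent flows are all simplifying assumptions; the exact transitions follow the paper's SPN model.

```python
def step_populations(n_g, n_b, n_e, lam, p_ids_fp, p_ids_fn, p_a, p_c, t_ids, dt):
    """One small time step of the flows N_g -> N_b, N_g -> N_e, N_b -> N_e
    described on the slides (beta = 1), using expected-value updates."""
    capture = lam * n_g * dt                      # good nodes compromised at rate lambda
    fp_evict = n_g * p_ids_fp / t_ids * dt        # good nodes evicted by system-level false positives
    caught = n_b * (1 - p_ids_fn) / t_ids * dt    # bad nodes evicted by a correct majority 'no' vote
    audited = n_b * (p_a * p_c) / t_ids * dt      # attacking bad nodes caught by an audit
    n_g -= capture + fp_evict
    n_b += capture - caught - audited
    n_e += fp_evict + caught + audited
    return n_g, n_b, n_e

# Example step with illustrative (assumed) parameter values
print(step_populations(128, 0, 0, lam=0.001, p_ids_fp=0.01, p_ids_fn=0.05,
                       p_a=0.5, p_c=0.5, t_ids=1.0, dt=0.1))
```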

Analyzing the Attack-Defense Game (contd.). The detection capability of the model is measured by the system-level false positive and false negative probabilities, which depend on the host-level false positive and false negative probabilities. The number of active bad nodes is N_a_bad = P_a N_b and the number of inactive bad nodes is N_i_bad = (1 - P_a) N_b. A false positive occurs when the majority vote is "no" against a good node.

Analyzing the Attack-Defense Game (contd.). The first term of the equation accounts for the case where a majority of the voters selected around a good target node are active bad nodes, who always vote against good target nodes. The denominator is the total number of ways to select m voters from the neighbor nodes; the numerator is the total number of ways to select at least m_maj bad voters out of the N_a_bad active bad nodes, with the rest drawn from the inactive bad and good nodes.

Analyzing the Attack-Defense Game (contd.). The second term accounts for cases where a majority of the voters are good (or inactive bad) nodes, but some of them mistakenly misidentify the good target node as bad; as a result, more than half of the votes are still cast against the good node. The denominator is the same as in the first term.

Analyzing the Attack-Defense Game (contd.). The numerator is the total number of combinations that select i "active bad" voters (with i below m_maj), j good or inactive bad voters who incorrectly vote against the target node such that i + j >= m_maj, and the remaining m - i - j voters who vote correctly. A combinatorial sketch of this probability follows.
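The following sketch reconstructs the two-term structure of the system-level false-positive probability for a good target node as described on these slides. The variable names, and the assumption that good and inactive bad voters misjudge a good target independently with probability H_pfp, are mine rather than the paper's exact equation.

```python
from math import comb

def system_level_fp(m, m_maj, n_active_bad, n_other, h_pfp):
    """Probability that at least m_maj of the m voters vote 'no' against a good target.
    n_active_bad: active bad neighbors (always vote 'no' against a good target).
    n_other: good or inactive bad neighbors (vote 'no' only by mistake, prob. h_pfp)."""
    denom = comb(n_active_bad + n_other, m)       # total ways to pick m voters from the neighbors
    prob = 0.0
    for i in range(0, min(m, n_active_bad) + 1):  # i active bad voters selected
        ways = comb(n_active_bad, i) * comb(n_other, m - i)
        if ways == 0:
            continue
        p_select = ways / denom
        if i >= m_maj:
            prob += p_select                       # first term: bad voters alone form a majority
        else:
            honest = m - i
            for j in range(m_maj - i, honest + 1):  # j honest voters misjudge the good target
                prob += p_select * comb(honest, j) * h_pfp**j * (1 - h_pfp)**(honest - j)
    return prob

# Example: 7 voters, majority threshold 4, 2 active bad and 20 other neighbors, H_pfp = 0.05
print(system_level_fp(7, 4, 2, 20, 0.05))
```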

Analyzing the Attack-Defense Game (contd.). Now consider the model where β = 1/2. A bad node gets another chance after being detected during an audit (it loses half of its life quota); if it commits another offense, it is evicted from the system. The structure is similar to before, but one more layer has been added, into which identified bad nodes are pushed (with probability P_a P_c). The model can be generalized: if β = 1/n, there are n layers in the SPN model. A small quota-tracking sketch follows.
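Purely as an illustration of the life-quota mechanics (assumed names, not the SPN itself), a node's quota could be tracked like this:

```python
def apply_detection(quota, beta):
    """Reduce a detected node's life quota by the decay parameter beta.
    With beta = 1 the first detection evicts the node; with beta = 1/n it
    takes n detections, matching the n layers of the generalized SPN model."""
    quota -= beta
    evicted = quota <= 1e-9   # treat (near-)zero quota as eviction
    return quota, evicted

quota, evicted = 1.0, False
while not evicted:
    quota, evicted = apply_detection(quota, beta=0.5)
    print(quota, evicted)
# 0.5 False
# 0.0 True
```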

Results: System Application & Setup. To test the design, it was applied to an autonomous mobile CPS consisting of different devices: sensor-carrying human actors, vehicles, and robots. The devices are assembled to run a mission in a battlefield or emergency-response situation.

Results: System Application & Setup. The testing environment conditions and IDS attack-defense strategies: N = 128 nodes in a team, moving randomly, each with an R = 250 m radio range; the neighbors of a node at time t are the nodes in the same location at time t. All nodes have an equal chance of being captured by outside or virus attacks, becoming compromised, and then attacking with probability P_a; the per-node capture rate is λ. Voting is periodic, performed every T_IDS interval with m neighbor nodes participating in majority voting. An energy model accounts for the cost of each IDS voting cycle and the cost of each defense-system audit. Because auditing is expensive, the defense system performs it only occasionally, with probability P_c; P_c is varied to test its effect on performance, with P_c^min = 1/(1+β) to fulfill Eq. 5. Each node runs a host-level anomaly-based IDS characterized by its H_pfn and H_pfp probabilities. Byzantine failure or energy depletion failure causes the autonomous IoT system to fail.

Results: System Application & Setup. The table lists the attack-defense strategy parameters for the system. The performance metric is the system reliability, expressed as MTTF (mean time to failure). To obtain numerical results, the SPN is parameterized with β = 1 and β = 1/2, and the SPN model is run through the SPNP tool to obtain the MTTF as output.

Results: Effect of T_IDS on MTTF. When T_IDS is too small, intrusion detection runs too frequently and the system exhausts its energy, giving a short lifetime. As T_IDS increases, the system saves more energy and the lifetime increases. When T_IDS is too large, the system saves more energy but fails to catch bad nodes, so bad nodes accumulate (toward the one-third threshold) and cause failure. The optimal T_IDS value decreases as the attack probability P_a increases: with higher P_a, more frequent IDS voting evicts bad nodes while good nodes stay, leading to fewer failures. The system has the highest MTTF when P_a = 0, since no attacks from bad nodes during IDS voting means fewer failures. The IDS attack-defense game thus discourages nodes from attacking.

Results: Effect of P_c on MTTF. The case P_c = 0.5 is the MTTF obtained by the proposed system; it is the best (highest) value, since P_c^min = 0.5 means bad nodes are discouraged from attacking. The case P_c = 0 is the MTTF obtained by the baseline mobile CPS. Increasing P_c affects the MTTF differently depending on P_a; when P_a = 0, increasing P_c only wastes energy (audits with no attacks) and the MTTF decreases. For a given P_a value, there is an optimal P_c value that maximizes the MTTF, with distinct optima observed for P_a <= 0.5, P_a = 0.6, P_a = 0.7, and P_a = 1. When P_a is low, auditing wastes more energy than it gains from evicting bad nodes, giving a low optimal P_c value. When P_a is high, auditing evicts more bad nodes than the energy it wastes, giving a high optimal P_c value. When P_a = 1 (attack all the time), auditing every IDS voting outcome delays failure but consumes more energy.

Results: Comparing Models. MTTF loss occurs when P_a is low (<= 0.5), because energy is wasted on auditing; MTTF gain occurs when P_a is high (>= 0.5), thanks to the removal of malicious nodes. When P_a is high, the design outperforms the baseline system, since attacking bad nodes are removed through more frequent auditing, preventing failure. When P_a is low, the system performs worse than the baseline, and it gets worse as P_c increases because of the energy wasted on auditing. This means the system is most effective in highly hostile and reckless environments.

Setting up P_c. In practice, we do not know P_a, so it is not possible to dynamically set P_c to its optimal value to maximize the system MTTF. The paper suggests P_c^min = 1/(1+β) = 1/2 to discourage bad nodes from attacking during IDS voting. With 50% system auditing, there are two ways bad nodes may react to the design: (1) Bad nodes are sensible and follow the payoff logic of the game design; otherwise their payoff is < 0, so P_a is effectively forced to 0 and the system reaches its maximum MTTF. (2) Bad nodes are insensible and do not follow the IDS game design; even then, the system MTTF is improved by trading off energy to achieve a higher true positive rate and a lower false positive rate.

Conclusion. The paper pioneers the concept of IDS attack-defense games that push nodes to cooperate with the IDS, maximizing the system reliability of autonomous IoT systems. An analytical condition is derived under which a node has no incentive to attack during IDS voting. The authors also demonstrate the practical use of the IDS in a mobile CPS where each device is given a life quota. The IDS game design improves the MTTF by trading off energy to improve the detection rate, especially in hostile environments.

Conclusion (contd.). A future direction is to apply this system where nodes are heterogeneous. This requires a hierarchical SPN in which the lower level deals with diverse attack-defense behaviors and the higher level describes the aggregate behaviors and system responses.