@inproceedings{IJCAINembhardCarvalho21,
title = {A Smart and Defensive Human-Machine Approach to Code Analysis},
author = {Fitzroy D. Nembhard and Marco M. Carvalho},
year = {2021},
date = {2021-08-20},
urldate = {2021-08-20},
booktitle = {First International Workshop on Artificial Intelligence, IJCAI-ACD 2021},
publisher = {ijcai.org},
abstract = {Static analysis remains one of the most popular approaches for detecting and correcting poor or vulnerable program code. It involves the examination of code listings, test results, or other documentation to identify errors, violations of development standards, or other problems, with the ultimate goal of fixing these errors so that systems and software are as secure as possible. There exists a plethora of static analysis tools, which makes it challenging for businesses and programmers to select a tool to analyze their program code. It is imperative to find ways to improve code analysis so that it can be employed by cyber defenders to mitigate security risks. In this research, we propose a method that employs virtual assistants to work with programmers to ensure that software is as safe as possible in order to protect safety-critical systems from data breaches and other attacks. The proposed method employs a recommender system that uses various metrics to help programmers select the most appropriate code analysis tool for their project and guides them through the analysis process. The system further tracks the user's behavior regarding the adoption of the recommended practices.},
keywords = {agent, Google Assistant, NLP, virtual assistant, voice assistant, vulnerability detection},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{AOSEEvaluation,
title = {Evaluation and Comparison of Agent-Oriented Methodologies: A Software Engineering Viewpoint},
author = {Khaled Slhoub and Marco Carvalho and Fitzroy Nembhard},
doi = {10.1109/SYSCON.2019.8836962},
year = {2019},
date = {2019-04-08},
urldate = {2019-04-08},
booktitle = {2019 IEEE International Systems Conference (SysCon)},
pages = {1--8},
abstract = {Numerous agent-oriented methodologies that offer a rich pool of resources to support developers of agent-based systems have been proposed. However, the use of existing methodologies in industrial settings is still limited due to the large volume of methodologies, diversity of covered scopes, ambiguity in concepts, and lack of maturity. This makes it difficult for agent technology practitioners to choose the methodology that best fits their given development context. To eliminate this agent-based development bottleneck, it is important to introduce suitable methods for evaluating, comparing, and classifying agent-oriented methodologies in order to promote their adoption among practitioners. Having systems to evaluate methodologies can effectively help developers better understand existing methodologies, realize their benefits, outline their pros and cons, and assist practitioners with selecting the best-fit methodology for a specific agent-based project. In response, this paper proposes a novel criteria-based evaluation, informed by software engineering practices, to assess and compare agent-oriented methodologies. The proposed evaluation is derived from the software engineering body of knowledge (SWEBOK) and provides a simplified method to assess the coverage degree of an agent-oriented methodology with respect to major software knowledge areas such as the requirements and testing phases. We demonstrate the applicability of the proposed evaluation by applying it to three agent-oriented methodologies (PASSI, MaSE, and Prometheus) in the software engineering requirements and testing phases.},
keywords = {agent, AOSE, MaSE, PASSI, Prometheus, software engineering, software quality, software requirements, standards, SWEBOK},
pubstate = {published},
tppubtype = {inproceedings}
}