Satyam Mishra
Satyam Mishra is a passionate and results-driven AI Engineer/Researcher with more than four years of experience spanning professional and academic settings. He specializes in LLMs, NLP, machine learning, computer vision, IoT, and embedded systems.
Satyam has worked on intriguing projects including building an automated self-driving robot car using deep learning, implementing object detection and dimension measurement systems achieving over 98% accuracy, and enhancing enterprise security through contactless facial authentication.
Skill | Proficiency | Skill | Proficiency |
---|---|---|---|
C | ⭐⭐⭐⭐⭐ | C++ | ⭐⭐⭐⭐⭐ |
LLMs | ⭐⭐⭐⭐ | Basic Operating Systems | ⭐⭐⭐⭐ |
UART, USB, I2C, SPI, etc. | ⭐⭐⭐⭐ | Software Development Life Cycle (SDLC) | ⭐⭐⭐⭐ |
SQL Server Integration Services (SSIS) | ⭐⭐⭐⭐⭐ | Java | ⭐⭐⭐⭐ |
SAP BPC | ⭐⭐⭐⭐ | Network Programming using C | ⭐⭐⭐⭐ |
NLP | ⭐⭐⭐⭐ | Processor Architecture Concepts | ⭐⭐⭐⭐ |
Research Skills | ⭐⭐⭐⭐⭐ | Academic Writing | ⭐⭐⭐⭐⭐ |
Linux/Linux Device Drivers | ⭐⭐⭐⭐ | Data Structures and Algorithms | ⭐⭐⭐⭐ |
Penetration Testing Web Apps with Kali and Burp Suite | ⭐⭐⭐⭐ | Web Development | ⭐⭐⭐⭐ |
Git/GitHub | ⭐⭐⭐⭐⭐ | RISC-V/ARM | ⭐⭐⭐⭐ |
Embedded Programming | ⭐⭐⭐⭐ | LangChain | ⭐⭐⭐⭐ |
Docker/Kubernetes | ⭐⭐⭐⭐ | Proteus | ⭐⭐⭐⭐ |
Lumerical FDTD | ⭐⭐⭐⭐⭐ | Lumerical Interconnect | ⭐⭐⭐⭐⭐ |
Lumerical Mode | ⭐⭐⭐⭐⭐ | Lumerical Device | ⭐⭐⭐⭐⭐ |
OpenCV | ⭐⭐⭐⭐⭐ | Python | ⭐⭐⭐⭐⭐ |
PyTorch | ⭐⭐⭐⭐⭐ | TensorFlow | ⭐⭐⭐⭐⭐ |
Pinecone | ⭐⭐⭐⭐⭐ | Hugging Face | ⭐⭐⭐⭐⭐ |
LLaMA | ⭐⭐⭐⭐⭐ | Streamlit | ⭐⭐⭐⭐⭐ |
SageMaker | ⭐⭐⭐⭐⭐ | APIs in general (REST, OpenAI, etc.) | ⭐⭐⭐⭐⭐ |
Dates | Work | Details |
---|---|---|
Feb, 2023 - Present | AI Engineer at Verysell Group Applied AI Lab (SmartDev LLC), A Verysell Group Company | • Architect and develop AI applications with a focus on versatility, meeting the diverse requirements of various industries. • Implement cutting-edge AI technologies across software projects to enhance functionality, automate tasks, and deliver advanced solutions tailored to client needs. • Design and refine user interfaces, prioritizing intuitive and seamless user experiences. • Continuously update and optimize software systems to align with industry trends and advancements, ensuring our technology remains at the forefront of innovation. • Collaborate with cross-functional teams to address challenges, troubleshoot issues, and drive continuous improvement in AI-driven projects. |
Dec, 2023 - Present | Education Software Engineer at Vision Mentors LLC | • Provide technical training so that beginners can adapt to the company's stack. • Design and create applications tailored for the education sector. • Integrate artificial intelligence (AI) technologies into educational software to enhance personalized learning experiences, automate administrative tasks, and provide adaptive feedback to students. • Create intuitive and user-friendly interfaces. • Continuously refine and update software to align with the evolving needs of educators and students, fostering a dynamic and engaging educational environment. |
Feb, 2023 - July, 2023 | Research Intern at VNU Information Technology Institute SISLAB - Vietnam National University, Hanoi | 1. Designing, implementing, and testing RISC-V multicore-based IoT systems. 2. Developing and integrating hardware and software components. 3. Ensuring high-quality deliverables through thorough system testing and debugging. 4. Collaborating with cross-functional teams for seamless integration of components. 5. Staying updated with emerging IoT and embedded systems technologies and trends. 6. Creating and maintaining technical documentation and reports. 7. Participating in project planning, estimation, and progress tracking. 8. Contributing to innovative solutions for IoT system challenges. |
July, 2013 - Nov, 2016 | Software Engineer at Mahila Khadi Gramya Sewa Sansthan, India | 1. Developed and maintained the EShakti software in a government-funded project by NABARD. 2. Managed and updated content on the organization's website. 3. Developed innovative ideas to promote social campaigns. 4. Implemented and maintained promotional content on the website. 5. Collaborated with teams to ensure consistent messaging. 6. Assisted in designing and enhancing website visuals. 7. Executed online marketing strategies and advertisements. 8. Created campaign-specific web pages and multimedia integration. 9. Coordinated with various departments for campaign alignment. 10. Ensured accurate and up-to-date information on the website. 11. Contributed to a seamless user experience on the website. 12. Participated in time-sensitive campaign execution. 13. Demonstrated strong attention to detail in content management. 14. Provided IT software knowledge training to office employees. 15. Mentored colleagues to enhance their software proficiency. 16. Shared insights and best practices for efficient computer use. 17. Created and delivered training materials for software learning. 18. Supported employees in troubleshooting basic IT issues. 19. Fostered a tech-savvy and efficient office environment. |
As an author of the research work titled "Automated Robot (Car) using Artificial Intelligence," published by IEEE and indexed in the Scopus database, I played a crucial role in the conception, design, implementation, and analysis of the research work. My responsibilities as an author included, but were not limited to:
Research Design: I played a significant role in the development of the research methodology, which includes designing the automated robot and selecting the appropriate artificial intelligence algorithms.
Data Collection: I have been responsible for collecting data that is relevant to the research question and that is in line with the study's objectives.
Analysis and Interpretation: I have played a critical role in analyzing the data collected and interpreting the findings in line with the study's objectives.
Writing and Editing: I have been responsible for writing and editing the research work, ensuring that the article is clear, concise, and in line with the guidelines and standards of IEEE and the Scopus database.
Review and Feedback: I have been required to review and provide feedback on the work of other authors, ensuring that their work is in line with the research objectives.
Submission and Publication: I have been responsible for submitting the research work to IEEE and ensuring that it was published and indexed in Scopus in a timely manner.
Presentation: I have been responsible for presenting my research findings at the ISMODE 2021 conference, demonstrating my ability to communicate my research work in a clear and concise manner to my peers and other interested parties.
Overall, as an author of the research work, I have contributed significantly to the advancement of the field of artificial intelligence and automated robotics.
As the author of the research work titled "Lightweight Authentication Encryption to Improve DTLS, Quark Combined with Overhearing to Prevent DoS and MITM on Low-Resource IoT Devices," published as a chapter in a Springer book and indexed in Scopus Q2, I have played a critical role in the design, implementation, and analysis of the research work. My technical responsibilities as an author included, but were not limited to:
Research Design: I have played a key role in the development of the research methodology, including designing the lightweight authentication encryption mechanism and integrating it with Datagram Transport Layer Security (DTLS), Quark, and overhearing techniques to improve the security of low-resource Internet of Things (IoT) devices against Denial-of-Service (DoS) and Man-in-the-Middle (MITM) attacks. This involved understanding the requirements of low-resource IoT devices, selecting appropriate cryptographic primitives, designing and implementing an efficient authentication and encryption mechanism, and testing the mechanism under realistic conditions.
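The authentication-tag idea at the heart of such mechanisms can be sketched with Python's standard library. This is a hypothetical illustration of truncated-MAC message authentication for constrained devices, not the chapter's Quark-based construction; the `protect`/`verify` functions and the 8-byte tag length are assumptions made for the sketch.

```python
import hashlib
import hmac
import secrets

# Illustrative only: a truncated HMAC tag authenticates each datagram.
# Low-resource IoT nodes often shorten tags to save bandwidth and energy.
KEY = secrets.token_bytes(16)
TAG_LEN = 8  # truncated tag: a common constrained-device trade-off

def protect(payload):
    """Append a truncated HMAC-SHA256 tag to the payload."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
    return payload + tag

def verify(datagram):
    """Return the payload if the tag checks out, else None."""
    payload, tag = datagram[:-TAG_LEN], datagram[-TAG_LEN:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
    # Constant-time comparison avoids timing side channels (relevant to MITM)
    return payload if hmac.compare_digest(tag, expected) else None

msg = protect(b"sensor:23.5C")
assert verify(msg) == b"sensor:23.5C"
tampered = msg[:-1] + bytes([msg[-1] ^ 1])  # flip one tag bit
assert verify(tampered) is None  # tampering is rejected
```

In a full scheme this tag would ride alongside encryption inside the DTLS record, but the verification pattern is the same.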
Data Collection: I have been responsible for collecting data through simulations, experiments, or analysis of existing datasets, to evaluate the performance of the proposed mechanism and compare it with existing methods. This involved developing realistic test scenarios, designing experiments, and collecting and analyzing data on metrics such as throughput, delay, energy consumption, and security.
Analysis and Interpretation: I have played a crucial role in analyzing and interpreting the data collected, including comparing the performance of the proposed mechanism with existing methods, identifying strengths and weaknesses of the proposed mechanism, and suggesting areas for future research.
Writing and Editing: I have been responsible for writing and editing the chapter, ensuring that it meets the technical and editorial standards of Springer and Scopus. This involved describing the research problem, outlining the research methodology, presenting the results of the experiments and analysis, and discussing the implications and contributions of the research.
Review and Feedback: I have been required to review and provide feedback on the work of other authors, ensuring that their work is in line with the research objectives, and providing constructive feedback on the technical and methodological aspects of their work.
Submission and Publication: I have been responsible for submitting the chapter to Springer, ensuring that it meets the formatting and style requirements, and addressing any comments or feedback from the editors and reviewers.
Presentation: I have been responsible for presenting my research findings at the ICIOT 2022 conference, demonstrating my ability to communicate my research work in a clear and concise manner to my peers and other interested parties.
Overall, as the author of this research work, I have contributed significantly to the development of a lightweight authentication encryption mechanism for low-resource IoT devices, which improves their security against DoS and MITM attacks. My technical expertise in cryptography, IoT security, and performance evaluation, as well as my attention to detail and critical thinking, have been essential for the successful completion of this research project. My research findings are expected to have practical implications for the design and deployment of secure and efficient IoT systems.
As the author of the research work titled "SATMeas - Object Detection and Measurement: Canny Edge Detection Algorithm," published as a chapter in a Springer book and indexed in Scopus Q2, I have played a crucial role in the development, implementation, and analysis of the research work. My technical responsibilities as an author have included, but were not limited to:
Research Design: I have played a key role in designing the research methodology, which includes developing the SATMeas system for object detection and measurement based on the Canny edge detection algorithm. This involved understanding the requirements of object detection and measurement, selecting appropriate algorithms and techniques, and designing an efficient and accurate system that can be applied to real-world scenarios.
Data Collection: I have been responsible for collecting data to evaluate the performance of the SATMeas system. This involved designing experiments, selecting appropriate test images, and collecting and analyzing data on metrics such as accuracy, precision, and recall.
Algorithm Implementation: I have been responsible for implementing the Canny edge detection algorithm and integrating it with the SATMeas system, ensuring that it is efficient and accurate, and that it meets the requirements of the research methodology.
Analysis and Interpretation: I have played a critical role in analyzing the data collected, evaluating the performance of the SATMeas system, identifying its strengths and weaknesses, and suggesting areas for improvement.
Writing and Editing: I have been responsible for writing and editing the chapter, ensuring that it meets the technical and editorial standards of Springer and Scopus. This involved describing the SATMeas system and its implementation, presenting the results of the experiments and analysis, and discussing the implications and contributions of the research.
Review and Feedback: I have been required to review and provide feedback on the work of other authors, ensuring that their work is in line with the research objectives, and providing constructive feedback on the technical and methodological aspects of their work.
Submission and Publication: I have been responsible for submitting the chapter to Springer, ensuring that it meets the formatting and style requirements, and addressing any comments or feedback from the editors and reviewers.
Presentation: I have been responsible for presenting my research findings at the AIMS 2022 conference, demonstrating my ability to communicate my research work in a clear and concise manner to my peers and other interested parties.
Overall, as the author of this research work, I have contributed significantly to the development of the SATMeas system, which enables accurate and efficient object detection and measurement using the Canny edge detection algorithm. My technical expertise in computer vision, image processing, and algorithm implementation, as well as my attention to detail and critical thinking, have been essential for the successful completion of this research project. My research findings are expected to have practical implications for the development of advanced object detection and measurement systems that can be applied in various fields, such as robotics, surveillance, and medical imaging.
As a co-author of the research work titled "Vaccination Inventory System for justified user using Natural Language Processing," published in AHFE 2023 and indexed in Scopus, I have played an important role in the development, implementation, and analysis of the research work. My technical responsibilities as a co-author have included, but were not limited to:
Research Design: I have contributed to the research design, which includes developing the Vaccination Inventory System that uses Natural Language Processing (NLP) to process user requests for vaccine inventory.
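A minimal sketch of the request-processing idea: simple keyword matching stands in for the system's actual NLP pipeline, and the `inventory` dictionary and `answer` function are hypothetical names introduced for the illustration.

```python
import re

# Hypothetical mini-inventory; the published system's data model may differ.
inventory = {"pfizer": 120, "moderna": 85, "astrazeneca": 40}

def answer(request):
    """Keyword-based parse of a stock query: find which vaccine is named."""
    text = request.lower()
    for vaccine, doses in inventory.items():
        if re.search(rf"\b{vaccine}\b", text):
            return f"{doses} doses of {vaccine.title()} are in stock."
    return "Sorry, I could not find that vaccine in the inventory."

print(answer("How many doses of Pfizer do we have left?"))
# -> 120 doses of Pfizer are in stock.
```

A production system would replace the regex lookup with proper intent classification and entity extraction, but the request-to-inventory mapping is the same.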
Writing and Editing: I have contributed to the writing and editing of the research work, ensuring that it meets the technical and editorial standards required for Scopus indexing. This involved describing the Vaccination Inventory System and its implementation, presenting the results of the experiments and analysis, and discussing the implications and contributions of the research.
Review and Feedback: I have been required to review and provide feedback on the work of other authors, ensuring that their work is in line with the research objectives, and providing constructive feedback on the technical and methodological aspects of their work.
Submission and Publication: I have contributed to the submission of the research work to AHFE 2023, ensuring that it meets the formatting and style requirements, and addressing any comments or feedback from the editors and reviewers.
Overall, as a co-author of this research work, I have contributed to the development of the Vaccination Inventory System, which enables the processing of user requests for vaccine inventory using NLP. My technical expertise in NLP, data analysis, and algorithm implementation, as well as my attention to detail and critical thinking, have been essential for the successful completion of this research project. My research findings are expected to have practical implications for the development of advanced inventory systems that can be applied in various fields, such as healthcare, logistics, and retail.
As the author of the research work titled "Using Security Metrics to Determine Security Program Effectiveness," published in AHFE 2023 and indexed in Scopus, I have played a crucial role in the conceptualization, design, implementation, and analysis of the research work. My technical responsibilities as the author included, but were not limited to:
Research Design: I have been responsible for designing the research methodology, which includes developing security metrics that can be used to assess the effectiveness of security programs in organizations. This involved understanding the security needs of different organizations, selecting appropriate metrics, and designing an efficient and accurate system that can be applied to real-world scenarios.
Data Collection: I have been responsible for collecting data to evaluate the performance of security programs in organizations. This involved designing experiments, selecting appropriate datasets, and collecting and analyzing data on metrics such as vulnerability identification, risk assessment, and incident response.
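One concrete incident-response metric of the kind described above, mean time to respond (MTTR), can be computed as follows. The incident timestamps are hypothetical; a real program would pull them from a SIEM or ticketing system.

```python
from datetime import datetime

# Hypothetical incident log: (detected, resolved) timestamp pairs
incidents = [
    (datetime(2023, 1, 3, 9, 0), datetime(2023, 1, 3, 13, 30)),
    (datetime(2023, 2, 11, 22, 15), datetime(2023, 2, 12, 6, 45)),
    (datetime(2023, 3, 7, 14, 0), datetime(2023, 3, 7, 16, 0)),
]

# Mean Time To Respond, in hours: average detected-to-resolved duration
hours = [(end - start).total_seconds() / 3600 for start, end in incidents]
mttr = sum(hours) / len(hours)
print(f"MTTR: {mttr:.1f} hours")  # -> MTTR: 5.0 hours
```

Tracking such a metric over time (rather than as a one-off number) is what lets it speak to program effectiveness.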
Analysis and Interpretation: I have been responsible for the analysis of the data collected, evaluating the performance of security programs in organizations, identifying their strengths and weaknesses, and suggesting areas for improvement.
Writing and Editing: I have been responsible for the writing and editing of the research work, ensuring that it meets the technical and editorial standards required for Scopus indexing. This involved describing the security metrics and their implementation, presenting the results of the experiments and analysis, and discussing the implications and contributions of the research.
Review and Feedback: I have been required to review and provide feedback on the work of other authors, ensuring that their work is in line with the research objectives, and providing constructive feedback on the technical and methodological aspects of their work.
Presentation: I have been responsible for presenting my research findings at the AHFE 2023 conference, demonstrating my ability to communicate my research work in a clear and concise manner to my peers and other interested parties.
Overall, as the author of this research work, I have contributed to the development of security metrics that can be used to evaluate the effectiveness of security programs in organizations. My technical expertise in security program evaluation, data analysis, and research design, as well as my attention to detail and critical thinking, have been essential for the successful completion of this research project. My research findings are expected to have practical implications for the development of advanced security programs that can be applied in various industries and domains, such as finance, healthcare, and government.
As a co-author of the research work titled "Detecting Stroke in Human Beings using Machine Learning," published in AHFE 2023 and indexed in Scopus, I have played a crucial role in the conceptualization, design, implementation, and analysis of the research work. My technical responsibilities as a co-author included, but were not limited to:
Research Design: I have been responsible for designing the research methodology, which includes selecting and applying machine learning techniques to detect the onset of a stroke in human beings.
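As a hedged sketch of this kind of approach, the snippet below trains a classifier on synthetic, imbalanced tabular data standing in for clinical stroke features. The random-forest choice, the feature count, and the class ratio are assumptions for the illustration, not the published model or dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical features (age, glucose, BMI, ...);
# strokes are rare, so the positive class is deliberately a minority.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

With such class imbalance, recall on the positive (stroke) class and calibrated probabilities usually matter more clinically than raw accuracy.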
Writing and Editing: I have been responsible for writing and editing the research work, ensuring that it meets the technical and editorial standards required for Scopus indexing. This involved describing the machine learning techniques and their implementation, presenting the results of the experiments and analysis, and discussing the implications and contributions of the research.
Review and Feedback: I have been required to review and provide feedback on the work of other authors, ensuring that their work is in line with the research objectives, and providing constructive feedback on the technical and methodological aspects of their work.
Presentation: I have been responsible for presenting my research findings at the AHFE 2023 conference, demonstrating my ability to communicate my research work in a clear and concise manner to my peers and other interested parties.
Overall, as a co-author of this research work, I have contributed to the development of a machine learning model that can detect the onset of stroke in human beings, which can have significant clinical implications for the timely diagnosis and treatment of stroke. My technical expertise in machine learning, data analysis, and research design, as well as my attention to detail and critical thinking, have been essential for the successful completion of this research project. My research findings are expected to have practical implications for the development of advanced medical diagnosis and treatment methods that can be applied in clinical settings.
As a co-author of the research work titled "Predicting Breast Cancer in Human using Machine Learning," published in AHFE 2023 and indexed in Scopus, I have played a crucial role in the conceptualization, design, implementation, and analysis of the research work. My technical responsibilities as a co-author included, but were not limited to:
Research Design: I have been responsible for designing the research methodology, which includes selecting and applying machine learning techniques to predict the risk of breast cancer in human beings.
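As an illustrative baseline (not the published model), a standard approach to this task trains a classifier on scikit-learn's bundled Wisconsin breast-cancer dataset, which stands in here for the study's data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 30 tumour-measurement features, binary malignant/benign labels
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=42)

# Scaling + logistic regression: a common, interpretable baseline
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.2f}")
```

A baseline like this is typically the reference point against which more elaborate models are compared during research design.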
Writing and Editing: I have been responsible for writing and editing the research work, ensuring that it meets the technical and editorial standards required for Scopus indexing. This involved describing the machine learning techniques and their implementation, presenting the results of the experiments and analysis, and discussing the implications and contributions of the research.
Review and Feedback: I have been required to review and provide feedback on the work of other authors, ensuring that their work is in line with the research objectives, and providing constructive feedback on the technical and methodological aspects of their work.
Presentation: I have been responsible for presenting my research findings at the AHFE 2023 conference, demonstrating my ability to communicate my research work in a clear and concise manner to my peers and other interested parties.
Overall, as a co-author of this research work, I have contributed to the development of a machine learning model that can predict the risk of breast cancer in human beings, which can have significant clinical implications for the timely diagnosis and treatment of breast cancer. My technical expertise in machine learning, data analysis, and research design, as well as my attention to detail and critical thinking, have been essential for the successful completion of this research project. My research findings are expected to have practical implications for the development of advanced medical diagnosis and treatment methods that can be applied in clinical settings, and contribute to the efforts to reduce the incidence and mortality of breast cancer.
In this book chapter titled "Efficient Mask Detection System for Public Safety," I played a crucial role as an author, contributing significantly to the development and implementation of an innovative real-time face mask detection system. The chapter is indexed in the prestigious Scopus database, signifying its recognition and relevance in the field of computer vision and public safety.
My technical responsibilities included, but were not limited to:
Research Design: As a key contributor to this work, I actively participated in designing the research methodology, focusing on the development of a highly efficient and accurate face mask detection system. The goal was to address the challenges posed by the COVID-19 pandemic and contribute to public health and safety.
Algorithm and Architecture Selection: My role involved evaluating various deep learning algorithms and model architectures to identify the most suitable combination for real-time face mask detection. After rigorous experimentation, the Single Shot Detector (SSD) algorithm in conjunction with MobileNetV2 architecture was selected, showcasing its effectiveness in ensuring real-time processing capabilities.
Data Collection: I played a vital role in collecting relevant and diverse datasets for training and testing the face mask detection model. The data collection process was meticulous to ensure the system's generalizability and accuracy in various real-world scenarios.
Analysis and Interpretation: Being deeply involved in the analysis phase, I conducted thorough evaluations of the developed face mask detection model. The performance of different model architectures was compared, and MobileNetV2 emerged as the most optimal choice for CPU-based devices.
Implementation and System Integration: I contributed to the practical implementation of the system, seamlessly integrating deep learning techniques with OpenCV to enable real-time face mask detection on CPU-based devices. The system's ability to operate without the need for GPUs expands its applicability to edge devices, making it suitable for deployment in numerous public settings.
High Accuracy Achieved: Through our collaborative efforts, the face mask detection model demonstrated an impressive accuracy score of 0.97 on the testing data, indicating its effectiveness in monitoring face mask compliance with exceptional precision.
Public Safety Applications: The book chapter outlines the potential applications of the developed face mask detection system in diverse public settings, such as banking information systems, parks, schools, hotels, and hospitals. By aiding in the prevention of virus transmission during the ongoing pandemic and future outbreaks, this research study offers an efficient and accessible solution to enhance public safety.
Contribution to the Field: Our research contributes significantly to the field of computer vision, artificial intelligence, and public safety. By providing a comprehensive solution to monitor face mask usage in real-time, our work assists in mitigating the spread of infectious diseases, making a positive impact on society's well-being.
In conclusion, this book chapter represents our collective efforts in advancing the state-of-the-art in mask detection systems and underscores our commitment to using cutting-edge technology for the betterment of public health and safety. Its inclusion in the Scopus database reflects its significance as a valuable resource for researchers, practitioners, and policymakers alike.
In my capacity as an author, I have significantly contributed to a book chapter titled "Mitigating the Threat of Multi-Factor Authentication (MFA) Bypass through Man-in-the-Middle Attacks using EvilGinx2." This chapter sheds light on the vulnerabilities and risks associated with Multi-Factor Authentication (MFA) implementations, particularly in the context of man-in-the-middle (MITM) attacks facilitated by the sophisticated tool called EvilGinx2. The chapter's inclusion in the esteemed Scopus database highlights its relevance and importance in the field of cybersecurity and account protection. This research was presented at the First National Symposium on Innovation and Challenges in Computing and Innovative Technologies for Sustainable Future (ICCIT-2023), which was hosted at British University, Vietnam.
My technical responsibilities included, but were not limited to:
Research Objective: Our primary goal was to thoroughly investigate the potential threats posed by MFA bypass techniques, specifically focusing on the utilization of the EvilGinx2 tool. We aimed to raise awareness about the risks associated with MFA and provide insights into potential countermeasures to enhance account security.
EvilGinx2 Analysis: As an author, I actively contributed to the analysis of EvilGinx2 as a powerful red team tool capable of intercepting login credentials and session cookies during MITM attacks. By exploiting its functionalities, attackers could circumvent MFA protections and gain unauthorized access to user accounts.
Cloning Legitimate Websites: The chapter extensively examines how EvilGinx2 leverages advanced techniques to clone legitimate websites, creating deceptive login portals that prompt users to provide their MFA codes or push prompts unknowingly.
Data Capture and Implications: Our research delves into the specific data that attackers can obtain using EvilGinx2, including sensitive information such as usernames, passwords, and authentication cookies. This thorough exploration underscores the severity of potential data breaches that could occur if MFA bypass techniques are successfully executed.
Risk Mitigation Strategies: I actively contributed to proposing potential risk mitigation strategies to safeguard against MFA bypass threats. These strategies aim to fortify MFA implementations and enhance overall account protection.
Significance for Cybersecurity Community: The book chapter's findings and insights hold considerable value for the cybersecurity community. By identifying the weaknesses in MFA systems and raising awareness about the potential exploitation of EvilGinx2, this research contributes to the ongoing efforts to secure online accounts and protect user data.
Importance of Scopus Indexing: The chapter's inclusion in the Scopus database highlights its recognition and credibility within the academic and research community. Its indexing in Scopus amplifies its reach and ensures that its critical findings are accessible to a broader audience of researchers, practitioners, and policymakers.
Conclusion: In conclusion, this book chapter represents a collective effort to understand and mitigate the threats posed by MFA bypass techniques through the lens of the powerful EvilGinx2 tool. By addressing the potential risks and proposing risk mitigation strategies, our research aims to bolster the security of MFA implementations and contribute to the advancement of cybersecurity practices in an ever-evolving digital landscape.
As a contributing author, my role in this book chapter titled "Reviewing User Interface Design & Usability in Information Systems" has been pivotal in advancing our knowledge of User Interface (UI) and Usability in the realm of Information Systems. The chapter is indexed in the esteemed Scopus database, underscoring its significance and relevance in the field of Human-Computer Interaction (HCI) and User Experience (UX) research. This research was presented at the First National Symposium on Innovation and Challenges in Computing and Innovative Technologies for Sustainable Future (ICCIT-2023), which was hosted at British University, Vietnam.
My technical responsibilities included, but were not limited to:
Research Objective: Our primary objective was to make significant contributions to the understanding of UI design and usability in Information Systems. We sought to create an information system that not only adheres to UI principles but also undergoes rigorous usability testing, resulting in an efficient and effective system that fosters enhanced user satisfaction and exceptional user experiences.
Application of UI Principles: As an author, I actively participated in applying well-established UI principles during the design phase of the information system. Our collective efforts focused on creating intuitive and user-friendly interfaces, considering factors like visual aesthetics, information organization, and interaction patterns.
Usability Testing Methodologies: I played a vital role in conducting and analyzing usability testing sessions for the developed information system. These comprehensive testing methodologies allowed us to assess the system's usability, identify potential pain points, and gather valuable user feedback for iterative improvements.
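One widely used instrument in such usability testing sessions is the System Usability Scale (SUS). Whether SUS was among the methods used in this study is not stated, so the scoring sketch below is purely illustrative; the formula itself (odd items score r-1, even items 5-r, total scaled by 2.5) is the standard one.

```python
def sus_score(responses):
    """Score a 10-item SUS questionnaire (each response is 1-5) to 0-100."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # odd items are indices 0,2,4,...
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical participant's answers to the ten SUS questions
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # -> 80.0
```

Scores above roughly 68 are conventionally read as above-average usability, which makes SUS a convenient single number to track across design iterations.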
Insights into Effective UI Design: The chapter delves into the outcomes of our research, presenting valuable insights into the most effective strategies for designing intuitive and user-friendly interfaces within the context of Information Systems. By exploring UI principles extensively, we aimed to provide actionable guidelines for creating visually appealing and functionally efficient interfaces.
Impact of Usability Testing: Our research extensively investigates the impact of various usability testing methods on the overall usability of the information system. By analyzing the results, we aimed to underscore the significance of usability testing in enhancing the user experience and overall system performance.
Emphasis on User Experience: The chapter emphasizes the importance of aligning Information Systems with users' needs and expectations. We acknowledge that delivering a seamless and engaging experience is vital for user satisfaction and long-term system success.
Practical Implications: By integrating UI principles and usability testing into the development process, we aimed to contribute practical knowledge that can be applied by researchers and practitioners in the field. Our insights into successful UI design and usability implementation can inform the creation of information systems that not only fulfill functional requirements but also deliver exceptional user experiences.
Benefits for the Field: The findings of this research contribute significantly to the field of HCI and UX, providing a comprehensive understanding of the significance of UI design and usability in the development of successful information systems. The knowledge presented in this chapter will equip researchers and practitioners with the tools to enhance UI design and usability in their projects, ultimately leading to improved user satisfaction and system performance.
Conclusion: In conclusion, this book chapter represents our collaborative efforts to advance the understanding of UI design and usability in Information Systems. By combining UI principles with robust usability testing, our research strives to elevate the quality of information systems and foster enhanced user experiences. The chapter's inclusion in the Scopus database highlights its recognition and potential impact on the HCI and UX research community.
As an author, I have made substantial contributions to the book chapter titled "Understanding the Impact and Implications of Emagnet and Pastebin in Cybersecurity." This chapter presents a critical investigation of Emagnet and Pastebin, focusing on their profound influence on data breaches and password security in the realm of cybersecurity. Indexed in the prestigious Scopus database, the chapter showcases its significance and relevance in the field of information security and cyber threat analysis. This research was presented at the First National Symposium on Innovation and Challenges in Computing and Innovative Technologies for Sustainable Future (ICCIT-2023), hosted by British University Vietnam.
My technical responsibilities included, but were not limited to:
Research Objective: Our primary objective was to comprehensively analyze the role of Emagnet and Pastebin in cybersecurity incidents, particularly their impact on data breaches and the security of passwords. Through empirical studies and thorough investigations, we aimed to shed light on the effectiveness of these tools in facilitating cyberattacks and their evolution as preferred platforms for hackers.
Emagnet and Pastebin Analysis: As an author, I actively contributed to the comprehensive analysis of Emagnet and Pastebin as powerful tools employed by cyber attackers to extract email addresses and passwords from leaked databases. The chapter investigates the intricacies of these tools, including their ability to avoid detection and track outdated uploads.
Ethical and Legal Concerns: Our research delves into the ethical and legal implications of using Emagnet and Pastebin in cybersecurity attacks. We highlight the significant concerns related to user consent and privacy violations, underscoring the importance of responsible and ethical cybersecurity practices.
Responsibilities of Platforms: The chapter emphasizes the ethical and legal considerations surrounding hacking tools and the responsibilities of platforms in preventing data breaches. It addresses the need for platform owners to be proactive in safeguarding user data from unauthorized access and misuse.
Implementation Study: As part of our research, we conducted an implementation study to assess Emagnet's effectiveness in a controlled environment. We ran a brute-force attack against dedicated test accounts; the attack succeeded, further affirming the tool's potential risks and implications.
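Why brute force succeeds against weak test credentials comes down to keyspace arithmetic. A small back-of-the-envelope sketch (the guess rate here is an assumed figure for illustration, not a number from our study):

```python
def keyspace(alphabet_size, length):
    # Number of candidate passwords of exactly `length` characters
    # drawn from an alphabet of `alphabet_size` symbols.
    return alphabet_size ** length

def worst_case_seconds(alphabet_size, length, guesses_per_second):
    # Upper bound on the time to exhaust the keyspace at a given rate.
    return keyspace(alphabet_size, length) / guesses_per_second

# A 6-character lowercase-only password (26-letter alphabet) at an
# assumed 1,000 guesses/second falls within days; each extra character
# or character class multiplies the search space.
print(keyspace(26, 6))  # 308915776
hours = worst_case_seconds(26, 6, 1000) / 3600
print(round(hours, 1))  # ~85.8
```

The exponential growth in `length` is exactly why the countermeasures discussed in the chapter (longer passphrases, rate limiting, lockouts) are effective against this class of attack.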
Countermeasures and Best Practices: Our research explores effective countermeasures and best practices that individuals and organizations can adopt to bolster their cybersecurity defenses. We aim to equip readers with actionable insights to safeguard against cyber threats associated with Emagnet and Pastebin.
Advocacy for Proactive Cybersecurity: By presenting insights into cyber attackers' practices and vulnerabilities in the security landscape, our research advocates for proactive cybersecurity policies. It stresses the significance of increased user awareness and collaboration among stakeholders in combating cyber threats effectively.
Collaborative Efforts for Cybersecurity: The chapter underscores the collective responsibility of stakeholders, including researchers, organizations, and policymakers, to enhance cybersecurity measures and protect sensitive information from unauthorized access. This collaborative approach is crucial in building resilient and secure digital environments.
Conclusion: In conclusion, this book chapter represents our collective efforts to understand the impact and implications of Emagnet and Pastebin in the domain of cybersecurity. The comprehensive analysis of these tools and their associated risks contributes to the advancement of knowledge in information security. Its inclusion in the Scopus database signifies its recognition as a valuable resource for researchers and practitioners seeking to address cyber threats effectively and promote a safer digital landscape.
As an author, I have significantly contributed to the book chapter on "Integrating State-of-the-Art Face Recognition and Anti-Spoofing Techniques into Enterprise Information Systems." The chapter explores the profound impact and implications of integrating cutting-edge face recognition and anti-spoofing technologies within Enterprise Information Systems.
My technical responsibilities included, but were not limited to:
Research Objective: Our primary goal was to comprehensively analyze the integration of advanced face recognition and anti-spoofing techniques within Enterprise Information Systems. Through rigorous empirical studies and in-depth investigations, our objective was to shed light on the effectiveness of these technologies in enhancing security measures and thwarting potential cyber threats.
Technological Integration Analysis: As an author, I actively contributed to the analysis of the integration process. This involved a detailed examination of cutting-edge Face Recognition Technology, particularly focusing on Convolutional Neural Networks (CNNs) as a core component for real-time face recognition within Enterprise Information Systems. Additionally, our research delved into the implementation of Landmark68 during the anti-spoofing phase, ensuring the system's ability to differentiate between genuine and counterfeit facial data.
Ethical and Legal Implications: Our research also explored the ethical and legal implications of deploying advanced face recognition systems. We meticulously investigated concerns related to user consent, privacy violations, and the responsible use of biometric data. Our emphasis was on establishing a framework that adheres to ethical standards and legal regulations.
Implementation and Validation: An integral part of our research involved the implementation of the developed solution in real-world scenarios. We conducted extensive validation tests, including simulations of potential spoofing attempts, to assess the system's accuracy and reliability. Our results indicated a remarkable accuracy level of 98.42%, showcasing the practicality and effectiveness of our integrated approach.
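CNN-based recognition pipelines of this kind typically reduce a face image to an embedding vector and accept or reject an identity claim by comparing embeddings. The sketch below shows only that final comparison step with a cosine-similarity threshold; the embeddings, dimensions, and the 0.8 cutoff are illustrative assumptions, not our system's actual pipeline or tuned threshold:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_match(probe, enrolled, threshold=0.8):
    # Accept the identity claim when similarity clears the threshold.
    # Real systems tune the threshold on a validation set to trade off
    # false accepts against false rejects.
    return cosine_similarity(probe, enrolled) >= threshold

probe = [0.10, 0.90, 0.40]     # embedding of the live capture
enrolled = [0.12, 0.88, 0.41]  # embedding stored at enrollment
print(is_match(probe, enrolled))  # True
```

In a full deployment the anti-spoofing check (e.g. a landmark-based liveness test, as with Landmark68) gates this comparison, so a photo of an enrolled user never reaches the matching stage.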
User Awareness and Collaboration: Recognizing the significance of user awareness, our research underscores the need for educating end-users about the capabilities and limitations of face recognition technology. Furthermore, we advocate for collaboration among stakeholders, including researchers, organizations, and policymakers, to establish a collective approach towards enhancing cybersecurity and data protection in digital environments.
Conclusion: In conclusion, our contribution to this book chapter represents a pivotal step towards advancing Enterprise Information Systems through the integration of state-of-the-art face recognition and anti-spoofing techniques. By addressing the critical challenges associated with security, privacy, and user awareness, our research significantly contributes to the ongoing efforts in creating secure, contactless, and technologically advanced digital ecosystems. Its recognition in esteemed academic platforms underscores its value as a comprehensive resource for researchers, practitioners, and policymakers aiming to create resilient and secure digital infrastructures for the future.
As an author, I have contributed significantly to the research paper titled "Debugging Human Pose Estimation with Explainable AI." This paper scrutinizes the prevalent challenges of false positives and unstable detections plaguing real-time object detection algorithms in human pose estimation. Our research delves into the intricacies of these issues, exploring their roots in complex scenarios like cluttered backgrounds, partial occlusions, low resolution, fast motion, poor image quality, and occlusions across frames. I presented this work at the ICISN 2024 International Research Conference in Hanoi, Vietnam.
My technical responsibilities included, but were not limited to:
Research Objective: Our primary aim was to dissect and understand the underlying causes of inaccuracies in human pose estimation algorithms. By leveraging explainable AI techniques, such as Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME), our study sought to illuminate the opaque decision-making processes of these complex models. This comprehensive analysis aimed not only to pinpoint the factors leading to false positives and erratic detections but also to enhance the transparency and interpretability of these algorithms.
Explainable AI Techniques Implementation: My role in the research was pivotal in implementing and analyzing Grad-CAM and LIME to explain the algorithm's decisions. Through Grad-CAM, we visualized the critical regions within images that influenced the model's predictions, highlighting areas mistakenly deemed relevant. Similarly, LIME was utilized to demonstrate how minor perturbations in the input could lead to significant changes in output, thereby identifying specific instances where background elements erroneously impacted detection accuracy.
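The perturbation intuition shared by these techniques can be shown without the libraries themselves. The toy sketch below is an occlusion-style sensitivity probe, not the actual Grad-CAM or LIME implementations we used: it masks named input regions of a black-box scorer and ranks regions by the resulting score drop, which is how one spots background pixels the model wrongly relies on.

```python
def occlusion_importance(score_fn, pixels, regions, baseline=0.0):
    """Rank input regions by how much masking them changes a model's score.

    score_fn: black-box model mapping a pixel list to a scalar confidence.
    regions:  dict mapping a region name to the pixel indices it covers.
    Each region is replaced with `baseline` in turn; a large score drop
    means the model relied on that region -- the intuition behind
    occlusion maps and LIME-style local explanations.
    """
    base_score = score_fn(pixels)
    drops = {}
    for name, idxs in regions.items():
        masked = list(pixels)
        for i in idxs:
            masked[i] = baseline
        drops[name] = base_score - score_fn(masked)
    return sorted(drops.items(), key=lambda kv: kv[1], reverse=True)

# Toy "detector" that only sums the first two pixels: masking the region
# covering them produces the largest drop, so it ranks first.
score = lambda px: px[0] + px[1]
ranked = occlusion_importance(score, [0.5, 0.7, 0.2, 0.1],
                              {"person": [0, 1], "background": [2, 3]})
print(ranked[0][0])  # person
```

If masking a "background" region produced the larger drop for a real detector, that would flag exactly the kind of spurious reliance our analysis set out to find.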
Error Analysis and Model Debugging: A significant part of our work involved conducting a thorough error analysis to identify and categorize the types of inaccuracies occurring within human pose estimation models. This process included detailed examinations of false positives, where non-human elements were incorrectly classified as humans, and instances of "blinking," where detections flickered in and out across sequential frames. Our research provided a structured approach to debugging these errors, significantly improving the models' reliability and performance.
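The "blinking" pattern is mechanically easy to surface once per-frame detections are logged. A minimal sketch (the one-frame-gap heuristic here is an illustrative simplification, not the exact criterion from our analysis) flags frames where a track is absent but present in both neighbours:

```python
def blinking_frames(detections):
    """Find frames where a track 'blinks': absent in frame t but present
    in both t-1 and t+1, i.e. an unstable detection rather than a true
    exit from the scene.

    detections: list of per-frame sets of detected track/object ids.
    Returns a list of (frame_index, missing_ids) pairs.
    """
    events = []
    for t in range(1, len(detections) - 1):
        missing = (detections[t - 1] & detections[t + 1]) - detections[t]
        if missing:
            events.append((t, missing))
    return events

# Person "A" vanishes only in frame 2 -- a blink, not a real exit.
frames = [{"A"}, {"A"}, set(), {"A"}, {"A"}]
print(blinking_frames(frames))  # [(2, {'A'})]
```

Counting such events per video gives a simple stability metric to track while debugging, alongside the explanation-based diagnostics above.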
Contributions to Model Transparency and Reliability: Our research has made a considerable contribution towards enhancing the transparency and reliability of human pose estimation models. By applying explainable AI techniques, we have not only uncovered the reasons behind specific detection failures but also paved the way for future advancements in debugging and optimizing these algorithms. Our work emphasizes the importance of model interpretability in the development of more accurate and dependable AI systems.
Conclusion: The insights garnered from our study "Debugging Human Pose Estimation with Explainable AI" represent a vital step forward in addressing the complex challenges faced by real-time object detection algorithms. By focusing on explainability and error analysis, our research offers valuable tools and methodologies for data scientists aiming to refine and enhance the accuracy of human pose estimation models. This work underlines the critical role of explainable AI in developing transparent, interpretable, and reliable artificial intelligence systems for the future.
As an author, I have made significant contributions to the research paper titled "Immersive Virtual Painting: Pushing Boundaries in Real-Time Computer Vision using OpenCV with C++." This paper was presented at RICE 2023 International Research Conference in Hyderabad, India, and published in the Annals of Computer Science and Information Systems. The research introduces an advanced approach to virtual painting that leverages real-time computer vision technologies to transform interactive digital art.
My technical responsibilities encompassed a broad range of critical tasks:
Research Objective: Our primary goal was to revolutionize the field of virtual painting by developing a real-time color detection algorithm that identifies specific hues with high accuracy. By implementing this technology in C++ using OpenCV, we aimed to facilitate a seamless and immersive painting experience, effectively bridging the gap between human creativity and computer interaction.
Implementation of Color Detection and Rendering Algorithms: My role was pivotal in designing and optimizing the color detection algorithm which achieves up to 97.4% accuracy in recognizing colors from live video feeds. This functionality allows colors to be translated instantly into digital brush strokes on a virtual canvas, enhancing the fluidity of the artistic process. Additionally, I was instrumental in integrating these algorithms with rendering techniques to ensure real-time performance and low-latency interactions.
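The core trick behind marker tracking for virtual painting is thresholding in HSV space, where hue isolates the colour itself from lighting. Our implementation is in C++ with OpenCV; the sketch below is a stdlib-Python illustration of the same per-pixel decision, with the hue band and saturation/value floors chosen for illustration rather than taken from the paper:

```python
import colorsys

def is_target_hue(rgb, hue_range, min_sat=0.4, min_val=0.3):
    """Decide whether an RGB pixel (0-255 channels) falls in a target hue band.

    Hue thresholding isolates the colour regardless of brightness, while
    the saturation and value floors reject washed-out or dark pixels
    that would otherwise cause false detections.
    """
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    lo, hi = hue_range
    return lo <= h <= hi and s >= min_sat and v >= min_val

# A vivid green pixel sits near hue 1/3 and passes; a desaturated grey
# pixel fails the saturation floor even if its hue happened to match.
print(is_target_hue((20, 220, 30), (0.25, 0.42)))    # True
print(is_target_hue((128, 128, 128), (0.25, 0.42)))  # False
```

In the real pipeline this test runs vectorized over every frame pixel (OpenCV's `inRange` on an HSV image), and the detected blob's centroid drives the virtual brush.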
Performance Optimization and Comparative Analysis: A significant portion of our work focused on enhancing the speed and efficiency of our algorithms through parallel processing and advanced coding techniques. Our findings demonstrated that our implementation in C++ was 3-4 times faster than similar algorithms implemented in Python, highlighting the effectiveness of C++ in processing-intensive real-time computer vision tasks.
Contributions to Interactive Digital Art Platforms: Our research has significantly advanced the capabilities of interactive digital art platforms. By automating the color detection and rendering processes, we have transformed virtual painting from a static activity into a dynamic, co-creative experience that responds intuitively to the artist’s inputs. This breakthrough is not only a technical achievement but also enriches the way artists interact with digital mediums.
Conclusion: The insights and advancements presented in our study "Immersive Virtual Painting: Pushing Boundaries in Real-Time Computer Vision using OpenCV with C++" mark a significant milestone in the evolution of human-computer interaction within the arts. Our work underscores the transformative potential of integrating sophisticated computer vision technologies with interactive digital art, offering novel tools and methods that enhance both the creative process and the user experience. This research paves the way for future innovations in the field of real-time interactive systems.
As an author, I have significantly contributed to the research paper titled "MACCHIEF—Machine learning-based Algorithm Classification for Complaint Handling and Improved Efficiency in Firms." This study was presented at the 2023 Eighth International Conference on Research in Intelligent Computing in Engineering, held in Hyderabad, India, and published in the Annals of Computer Science and Information Systems. Our research focuses on enhancing consumer complaint management in information enterprises through the application of advanced machine learning algorithms.
My technical responsibilities in this project included:
Research Objective: Our primary aim was to develop and implement a machine learning-based system that automates the classification, analysis, and response to consumer complaints. By addressing the increasing volume of customer feedback across various channels, our project sought to streamline operations, improve complaint resolution efficiency, and enhance customer satisfaction.
Development and Implementation of Classification Models: I played a crucial role in the development of a novel classification model that utilizes machine learning algorithms like LGBMClassifier and LinearSVC, which achieved accuracies of 76.78% and 79.37% respectively. This part of the work involved not only the technical development of the models but also the optimization of these algorithms to handle large datasets efficiently and accurately.
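The shape of such a pipeline (fit on labelled complaints, predict a category for new text) can be shown with a deliberately tiny stand-in. This sketch is a bag-of-words naive Bayes classifier on invented example complaints, not the paper's LGBMClassifier or LinearSVC models or its dataset:

```python
import math
from collections import Counter, defaultdict

class ToyComplaintClassifier:
    """Multinomial naive Bayes over bag-of-words with add-one smoothing.

    Same interface shape as the paper's pipelines (fit/predict), but a
    deliberately minimal model for illustration.
    """
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best, best_lp = None, float("-inf")
        for label, n in self.label_counts.items():
            lp = math.log(n / total)  # class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:  # smoothed per-word likelihoods
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = ToyComplaintClassifier().fit(
    ["card was charged twice", "refund not received yet",
     "app crashes on login", "login page shows an error"],
    ["billing", "billing", "technical", "technical"])
print(clf.predict("charged twice for my refund"))  # billing
```

Gradient-boosted trees and linear SVMs over richer features replace this toy model in production, but the routing logic around them is the same: new complaints are vectorized, scored per category, and dispatched to the right team.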
Analysis and Performance Optimization: A significant portion of our research involved analyzing the performance of the implemented models to ensure they meet the operational demands of modern enterprises. This process included tuning the models to improve accuracy and testing them in real-world scenarios to ensure they can handle the variability and complexity of consumer complaints.
Future Prospects and Adaptability: Our study not only addresses current needs but also looks toward future enhancements. We explored the potential for integrating natural language processing (NLP) techniques to deepen sentiment analysis and adapt to evolving consumer preferences. This foresight aims to keep our solutions robust and adaptable in a rapidly changing business environment.
Conclusion: The insights from our study "MACCHIEF—Machine learning-based Algorithm Classification for Complaint Handling and Improved Efficiency in Firms" mark a substantial advance in the automation of complaint management processes. By leveraging machine learning, our work provides a scalable and efficient approach to handling and analyzing consumer feedback, thereby enhancing both customer satisfaction and competitive edge for enterprises. This research underscores the transformative potential of AI in optimizing business operations and shaping future customer service strategies.
As an author, I have contributed extensively to the research paper titled "DICKT—Deep Learning-Based Image Captioning using Keras and TensorFlow." This paper was presented at the 2023 Eighth International Conference on Research in Intelligent Computing in Engineering and published in the Annals of Computer Science and Information Systems. Our research examines the effectiveness of a deep learning model for generating captions for images, highlighting the nuances of automated caption generation in relation to human-like accuracy and linguistic quality.
My technical responsibilities in this project included:
Research Objective: Our primary goal was to develop and assess a deep learning-based caption generation model using TensorFlow and Keras. By employing these powerful tools, we aimed to automate the process of generating descriptive text for images that closely resembles human-generated captions, providing a quantitative measure of performance using the BLEU Score metric.
Model Development and Implementation: I played a critical role in designing and implementing the captioning model. This involved selecting appropriate deep learning architectures and training methods to handle the complexities of natural language processing and image understanding. The use of Keras and TensorFlow facilitated the development of a robust model capable of processing and analyzing visual data to generate coherent and contextually appropriate captions.
Performance Evaluation and Analysis: A significant part of our research focused on evaluating the model's performance using the BLEU Score metric, which measures the linguistic similarity between the machine-generated captions and a set of reference captions. Our analysis provided insights into the model's ability to mimic human-like captions and highlighted the limitations of BLEU Score in capturing the richness of human language.
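The mechanics of BLEU, and why it misses linguistic richness, are easiest to see in a simplified form. The sketch below computes only clipped unigram precision with a brevity penalty; real BLEU combines clipped 1- to 4-gram precisions geometrically, so treat this as an illustration of the idea rather than the metric we reported:

```python
import math
from collections import Counter

def unigram_bleu(candidate, reference):
    """Simplified BLEU: clipped unigram precision times a brevity penalty.

    Enough to show why the metric rewards word overlap with the
    reference caption while remaining blind to fluency and meaning.
    """
    cand = candidate.lower().split()
    ref = reference.lower().split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Clip each candidate word's count by its count in the reference,
    # so repeating a matching word cannot inflate the score.
    clipped = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    precision = clipped / len(cand)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

print(unigram_bleu("a dog runs on the beach", "a dog runs on the beach"))  # 1.0
print(unigram_bleu("a cat runs on grass", "a dog runs on the beach"))
```

A caption like "a feline sprints across the lawn" would score near zero against the same reference despite being a faithful paraphrase, which is precisely the limitation our analysis highlights.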
Future Directions and Methodological Improvements: The study not only assessed current capabilities but also pointed towards future improvements in image captioning models. We discussed the potential for integrating more sophisticated natural language processing techniques to overcome the limitations observed with the BLEU Score, aiming for a more comprehensive evaluation of caption quality that goes beyond mere linguistic similarity.
Conclusion: The findings from our study "DICKT—Deep Learning-Based Image Captioning using Keras and TensorFlow" represent a significant advancement in the field of automated image captioning. By demonstrating the potential and limitations of current deep learning approaches, our research contributes to ongoing discussions about how best to evaluate and enhance the creativity and accuracy of machine-generated captions. This work emphasizes the need for broader metrics in assessing the quality of captions, advocating for a more holistic approach to understanding and improving the interface between artificial intelligence and human linguistic expression.
Many more publications are in the pipeline...
Award/Honor | By |
---|---|
Second Prize in 13th Student Research Conference | Vietnam National University |
Best Presentation Award in 13th Student Research Conference | Vietnam National University |
Five Good Students Award (Sinh Viên 5 Tốt) | Vietnam National University |
School Level Outstanding Youth | Vietnam National University - International School |
Ethical Hacking Certification | EH Academy |
Best Paper Award at iSummit 2021 | Vietnam National University |
Best Delegate Award at iSummit 2021 | Vietnam National University |
Reading Culture Ambassador | Vietnam National University |
NUS Enterprise Summer Programme | National University of Singapore |
Best Presentation Award | 5th International Science Forum 2021 |