Thursday, March 27, 2025

The Top 10 Blog Posts of 2024


Each January on the SEI Blog, we present the 10 most-visited posts of the previous year. This year’s top 10 list highlights the SEI’s work in software acquisition, artificial intelligence, large language models, secure coding, insider risk mitigation, and enterprise risk management. The posts, which were published between January 1, 2024, and December 31, 2024, are presented below in reverse order based on the number of visits.

by Ipek Ozkaya and Brigid O’Hearn

The fiscal year 2022 National Defense Authorization Act (NDAA) Section 835, “Independent Study on Technical Debt in Software-Intensive Systems,” required the Secretary of Defense to engage a federally funded research and development center (FFRDC) “to study technical debt in software-intensive systems.” To satisfy this requirement and lead this work, the Department of Defense (DoD) selected the Carnegie Mellon University (CMU) Software Engineering Institute (SEI), a recognized leader in the practice of managing technical debt. According to NDAA Section 835, the goal of the study was to provide, among other things, analyses and recommendations on quantitative measures for assessing technical debt, current and best practices for measuring and managing technical debt and its associated costs, and practices for reducing technical debt.

Our team spent more than a year conducting the independent study. The report we produced describes the conduct of the study, summarizes the technical characteristics observed, and presents the resulting recommendations. In this SEI Blog post, we summarize several recommendations that apply to the DoD and other development organizations seeking to analyze, manage, and reduce technical debt. You can find a complete discussion of the study methodology, findings, and recommendations in the SEI’s Report to the Congressional Defense Committees on National Defense Authorization Act (NDAA) for Fiscal Year 2022 Section 835 Independent Study on Technical Debt in Software-Intensive Systems.

Read the post in its entirety.

by Douglas Schmidt and John E. Robert

There is considerable interest in using generative AI tools, such as large language models (LLMs), to revolutionize industries and create new opportunities in the commercial and government domains. For many Department of Defense (DoD) software acquisition professionals, the promise of LLMs is appealing, but there is also a deep-seated concern that LLMs do not address today’s challenges due to privacy concerns, the potential for inaccuracy in the output, and insecurity or uncertainty about how to use LLMs effectively and responsibly. This blog post is the second in a series dedicated to exploring how generative AI, particularly LLMs such as ChatGPT, Claude, and Gemini, can be applied within the DoD to enhance software acquisition activities.

Our first blog post in this series presented 10 Benefits and 10 Challenges of Applying LLMs to DoD Software Acquisition and suggested specific use cases where generative AI can provide value to software acquisition activities. This second blog post expands on that discussion by showing specific examples of using LLMs for software acquisition in the context of a document summarization experiment, as well as codifying the lessons we learned from this experiment and our related work on applying generative AI to software engineering.

Read the post in its entirety.

by Robin Ruefle

Incident response is a critical need throughout government and industry as cyber threat actors look to compromise critical assets within organizations with cascading, often catastrophic, effects. In 2021, for example, a hacker allegedly accessed a Florida water treatment plant’s computer systems and attempted to poison the water supply. Across U.S. critical national infrastructure, 77 percent of organizations have seen a rise in insider-driven cyber threats over the last three years. The 2023 IBM Cost of a Data Breach report highlights the essential role of a well-tested incident response plan. Companies without a tested plan in place face 82 percent higher costs in the event of a cyber attack, compared to those that have implemented and tested such a plan.

Researchers in the SEI CERT Division compiled 10 lessons learned from our more than 35 years of developing and working with incident response and security teams across the globe. These lessons are relevant to incident response teams contending with an ever-evolving cyber threat landscape. In honor of the CERT Division (also referred to as the CERT Coordination Center in our work with the Forum of Incident Response and Security Teams) celebrating 35 years of operation, in this blog post we look back at some of the lessons learned from our cyber security incident response team (CSIRT) capacity-building experiences that also apply to other areas of security operations.

Read the post in its entirety.

by Roger Black

According to a 2023 Ponemon study, the number of reported insider risk incidents and the costs associated with them continue to rise. With more than 7,000 reported cases in 2023, the average insider risk incident cost organizations over $600,000. To help organizations assess their insider risk programs and identify potential vulnerabilities that could lead to insider incidents, the SEI CERT Division has released two tools available for download on its website. Previously available only to licensed partners, the Insider Threat Vulnerability Assessment (ITVA) and Insider Threat Program Evaluation (ITPE) toolkits provide practical methods to assess your organization’s ability to manage insider risk. This post describes the purpose and use of the toolkits, with a focus on the workbook components of the toolkits, which are the primary methods of program assessment.

Read the post in its entirety.

by David Svoboda

In recent weeks several vulnerabilities have rocked the Rust community, causing many to question the safety of the borrow checker, or of Rust in general. In this post, we examine two such vulnerabilities: the first is CVE-2024-3094, which involves some malicious files in the xz library, and the second is CVE-2024-24576, which involves command-injection vulnerabilities in Windows. How did these vulnerabilities arise, how were they discovered, and how do they involve Rust? More importantly, could Rust be susceptible to more similar vulnerabilities in the future?
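To make the second of those vulnerabilities concrete, the following is a minimal sketch of the affected call pattern; the batch-file name and input are hypothetical and not taken from the post. Before Rust 1.77.2, arguments passed through std::process::Command to a Windows .bat or .cmd file were not escaped safely for cmd.exe, so attacker-controlled input could inject additional commands.

```rust
use std::process::Command;

// Sketch of the call pattern behind CVE-2024-24576 (hypothetical script and input).
// Batch files are executed via cmd.exe, whose argument parsing rules differ from
// those of ordinary executables; pre-1.77.2 Rust did not account for those quirks.
fn generate_report(untrusted_name: &str) -> std::io::Result<()> {
    let status = Command::new("report.bat") // .bat implies a cmd.exe invocation on Windows
        .arg(untrusted_name)                // attacker-controlled in a real attack
        .status()?;
    println!("report.bat exited with {status}");
    Ok(())
}

fn main() {
    // A benign call; a crafted string could once have broken out of the argument.
    let _ = generate_report("quarterly");
}
```

Patched toolchains now escape such arguments conservatively and return an error when safe escaping is not possible, which is why updating Rust was the primary remediation.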

Last year we published two blog posts about the security provided by the Rust programming language. We discussed the memory safety and concurrency safety provided by Rust’s borrow checker. We also described some of the limitations of Rust’s security model, such as its limited ability to prevent various injection attacks, and the unsafe keyword, which allows developers to bypass Rust’s security model when necessary. Back then, our conclusion was that no language could be fully secure, yet the borrow checker did provide significant, albeit limited, memory and concurrency safety when not bypassed with the unsafe keyword. We also examined Rust through the lens of source and binary analysis, gauged its stability and maturity, and learned that the constraints and expectations for language maturity have slowly evolved over the decades. Rust is moving in the direction of maturity today, which is distinct from what was considered a mature programming language in 1980. Furthermore, Rust has made some notable stability guarantees, such as promising to deprecate rather than delete any crates in crates.io to avoid repeating the Leftpad fiasco.
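As a brief, self-contained illustration of that last point about the unsafe keyword (our own sketch, not an excerpt from those posts), the program below compiles because raw pointers are exempt from borrow checking; the unsafe block is what permits the dereference, and with it the responsibility for memory safety shifts from the compiler to the developer.

```rust
// Minimal sketch: `unsafe` bypasses the guarantees the borrow checker enforces.
fn main() {
    let dangling: *const i32;
    {
        let short_lived = 42;
        dangling = &short_lived as *const i32; // raw pointer outlives its referent
    } // `short_lived` goes out of scope here, so `dangling` no longer points to valid memory
    // Dereferencing a raw pointer is only allowed inside `unsafe`; doing so here is
    // undefined behavior, and an equivalent program written with references would not compile.
    let value = unsafe { *dangling };
    println!("{value}");
}
```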

Read the post in its entirety.

by Ipek Ozkaya, Douglas Schmidt, and Michael Hilton

The initial surge of excitement and concern surrounding generative artificial intelligence (AI) is gradually evolving into a more realistic perspective. While the jury is still out on the actual return on investment and tangible improvements from generative AI, the rapid pace of change is challenging software engineering education and curricula. Educators have had to adapt to the ongoing developments in generative AI to provide a realistic perspective to their students, balancing awareness, healthy skepticism, and curiosity.

In a recent SEI webcast, researchers discussed the impact of generative AI on software engineering education. SEI and Carnegie Mellon University experts spoke about the use of generative AI in the curriculum and the classroom, discussed how faculty and students can most effectively use generative AI, and considered concerns about ethics and equity when using these tools. The panelists took questions from the audience and drew on their experience as educators to speak to the critical questions generative AI raises for software engineering education.

This blog post features an edited transcript of responses from the original webcast. Some questions and answers have been rearranged and revised for clarity.

Read the post in its entirety.

by Jeff Gennari, Shing-hon Lau, and Samuel J. Perl

Large language models (LLMs) have shown a remarkable ability to ingest, synthesize, and summarize knowledge while simultaneously demonstrating significant limitations in completing real-world tasks. One notable domain that presents both opportunities and risks for leveraging LLMs is cybersecurity. LLMs could empower cybersecurity experts to be more efficient or effective at preventing and stopping attacks. However, adversaries could also use generative artificial intelligence (AI) technologies in kind. We have already seen evidence of actors using LLMs to aid in cyber intrusion activities (e.g., WormGPT, FraudGPT, etc.). Such misuse raises many important cybersecurity-capability-related questions, including

  • Can an LLM like GPT-4 write novel malware?
  • Will LLMs become critical components of large-scale cyber-attacks?
  • Can we trust LLMs to provide cybersecurity experts with reliable information?

The answer to these questions depends on the analytic methods chosen and the results they provide. Unfortunately, current methods and techniques for evaluating the cybersecurity capabilities of LLMs are not comprehensive. Recently, a team of researchers in the SEI CERT Division worked with OpenAI to develop better approaches for evaluating LLM cybersecurity capabilities. This SEI Blog post, excerpted from a recently published paper that we coauthored with OpenAI researchers Joel Parish and Girish Sastry, summarizes 14 recommendations to help assessors accurately evaluate LLM cybersecurity capabilities.

Read the post in its entirety.

by John E. Robert and Douglas Schmidt

Department of Defense (DoD) software acquisition has long been a complex and document-heavy process. Historically, many software acquisition activities, such as generating Requests for Information (RFIs), summarizing government regulations, identifying relevant industry standards, and drafting project status updates, have required considerable human-intensive effort. However, the advent of generative artificial intelligence (AI) tools, including large language models (LLMs), offers a promising opportunity to accelerate and streamline certain aspects of the software acquisition process.

Software acquisition is one of many complex mission-critical domains that may benefit from applying generative AI to augment and/or accelerate human efforts. This blog post is the first in a series dedicated to exploring how generative AI, particularly LLMs like ChatGPT-4, can enhance software acquisition activities. In this post we present 10 benefits and 10 challenges of applying LLMs to the software acquisition process and suggest specific use cases where generative AI can provide value. Our focus is on providing timely information to software acquisition professionals, including defense software developers, program managers, systems engineers, cybersecurity analysts, and other key stakeholders, who operate within challenging constraints and prioritize security and accuracy.

Read the post in its entirety.

by Mark Sherman

The average code sample contains 6,000 defects per million lines of code, and the SEI’s research has found that 5 percent of those defects become vulnerabilities. This translates to roughly 3 vulnerabilities per 10,000 lines of code. Can ChatGPT help improve this ratio? There has been much speculation about how tools built on top of large language models (LLMs) may impact software development, more specifically, how they may change the way developers write code and evaluate it.
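For reference, the conversion behind that figure is straightforward arithmetic:

\[
\frac{6{,}000\ \text{defects}}{10^{6}\ \text{LOC}} \times 0.05\ \frac{\text{vulnerabilities}}{\text{defect}} \;=\; \frac{300\ \text{vulnerabilities}}{10^{6}\ \text{LOC}} \;=\; \frac{3\ \text{vulnerabilities}}{10{,}000\ \text{LOC}}
\]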

In March 2023 a team of CERT Secure Coding researchers (Robert Schiela, David Svoboda, and myself) used ChatGPT 3.5 to examine the noncompliant software code examples in our CERT Secure Coding standard, specifically the SEI CERT C Coding Standard. In this post, I present our experiment and findings, which show that while ChatGPT 3.5 has promise, there are clear limitations.

Read the post in its entirety.

by Greg Touhill

The role of the chief information security officer (CISO) has never been more critical to organizational success. The present and near future for CISOs will be marked by breathtaking technical advances, particularly the integration of artificial intelligence technologies into business capabilities, as well as emergent legal and regulatory challenges. Continued advances in generative artificial intelligence (AI) will accelerate the proliferation of deepfakes designed to erode public trust in online information and public institutions. Moreover, these challenges will be amplified by an unstable global theater in which nefarious actors and nation states chase opportunities to exploit any potential organizational weakness. Some forecasts have already characterized 2024 as a pressure-cooker environment for CISOs. In such an environment, skills are critical. In this post I outline the top 10 skills that CISOs need for 2024 and beyond. These recommendations draw upon my experience as the director of the SEI’s CERT Division, as well as my service as the first federal chief information security officer of the United States, leading cyber operations at the U.S. Department of Homeland Security, and my extended military service as a communications and cyberspace operations officer.

Read the post in its entirety.

Looking Ahead in 2025

We publish a new post on the SEI Blog weekly. In the coming months, look for posts highlighting the SEI’s work in artificial intelligence, machine learning, cybersecurity, software engineering, and more.
