
Brilhante, Maria de Fátima

Search Results

Now showing 1 - 3 of 3
  • BetaBoop function, BetaBoop random variables and extremal population growth
    Publication · Brilhante, Maria de Fátima; Pestana, Pedro Duarte; Lovric, Miodrag
    Dictionary entry.
  • Measuring the risk of vulnerabilities exploitation
    Publication · Brilhante, Maria de Fátima; Pestana, Dinis; Pestana, Pedro Duarte; Rocha, Maria Luísa
    Modeling the vulnerability lifecycle and exploitation frequency is at the core of network security evaluation. Pareto, Weibull, and log-normal models have been widely used to model the exploit and patch availability dates, the time to compromise a system, the time between compromises, and the exploitation volumes. Random samples (systematic and simple random sampling) of the time from publication to update of cybervulnerabilities disclosed in 2021 and in 2022 are analyzed to evaluate the goodness-of-fit of the traditional Pareto and log-normal laws. As censoring and thinning almost surely occur, other heavy-tailed distributions in the domain of attraction of extreme value or geo-extreme value laws are investigated as suitable alternatives. Goodness-of-fit tests, the Akaike information criterion (AIC), and the Vuong test support the statistical choice of the log-logistic, a geo-max-stable law in the domain of attraction of the Fréchet model of maxima, with hyperexponential and generalized extreme value fittings as runners-up. Evidence that the data come from a mixture of differently stretched populations affects vulnerability scoring systems, specifically the Common Vulnerability Scoring System (CVSS).
  • Economic impact of healthcare cyber risks
    Publication · Brilhante, Maria de Fátima; Mendonça, Sandra; Pestana, Pedro Duarte; Rocha, Maria Luísa; Santos, Rui
    Purpose: The healthcare sector is a primary target for cybercriminals, with health data breaches ranking among the most critical threats. Despite stringent penalties imposed by the U.S. Department of Health and Human Services Office for Civil Rights (OCR), vulnerabilities still persist due to slow detection and ineffective data protection measures. On the other hand, because organizations are often reluctant to disclose security breaches for fear of reputational and market share losses, penalties can serve as a useful proxy for quantifying losses and insurance claims. Methods: This study analyzes fines and settlements (2008–2024) using the traditional log-normal, the generalized extreme value (GEV), and other heavy-tailed statistical models, including the geo-max-stable log-logistic law, as well as the hyperexponential and hyper-log-logistic mixture models. Results: Mixture models, either the hyperexponential or the hyper-log-logistic, deliver the best fit for OCR penalties, while for yearly maxima the best fit is achieved with the GEV distribution. Regarding Attorneys General fines, the hyperexponential model is optimal, with the GEV model again excelling for their yearly maxima. Hence, mixture models effectively capture the dual nature of penalty data, comprising clusters of moderate and extreme values. However, yearly maxima align better with the GEV model. Conclusions: The findings suggest that while Panjer's theory for aggregate claims suffices for moderate claims, it must be supplemented with strategies to address extreme cybercrime scenarios, ensuring insurers and reinsurers can manage severe losses effectively.
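
As an illustration of the model comparison described in the second result above, the sketch below fits the traditional log-normal against the log-logistic (SciPy's `fisk`) and the GEV, and ranks them by AIC with a Kolmogorov–Smirnov check. This is a minimal sketch under stated assumptions: the data are simulated stand-ins for the publication-to-update times, not the paper's 2021/2022 samples, and the Vuong test and hyperexponential fit are omitted for brevity.

```python
import numpy as np
from scipy import stats

# Simulated stand-in for the days from CVE publication to update
# (the paper's real 2021/2022 samples are not reproduced here).
rng = np.random.default_rng(0)
days = stats.fisk.rvs(c=1.8, scale=30.0, size=250, random_state=rng)

candidates = [
    ("log-normal",   stats.lognorm,    {"floc": 0}),  # traditional model
    ("log-logistic", stats.fisk,       {"floc": 0}),  # geo-max-stable, Fréchet domain
    ("GEV",          stats.genextreme, {}),           # runner-up in the paper
]

for name, dist, fixed in candidates:
    params = dist.fit(days, **fixed)                  # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(days, *params))
    k = len(params) - len(fixed)                      # number of free parameters
    aic = 2 * k - 2 * loglik
    ks = stats.kstest(days, dist.cdf, args=params)    # goodness-of-fit check
    print(f"{name:12s}  AIC = {aic:9.1f}   KS p-value = {ks.pvalue:.3f}")
```

The smallest AIC indicates the preferred model; since the toy sample is drawn from a log-logistic law, that distribution should come out ahead here, which is only an illustration of the selection procedure, not a reproduction of the paper's results.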
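In the same spirit, the block-maxima analysis of the third result can be sketched by fitting a GEV distribution to yearly maxima of penalty amounts. The figures below are simulated from a hypothetical two-component mixture purely for illustration (the real 2008–2024 OCR fines are not reproduced), and the 10-year return level is an assumed summary statistic, not a finding from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical penalty amounts (USD), simulated from a two-component mixture:
# a cluster of moderate fines plus rarer extreme ones (not the real OCR data).
rng = np.random.default_rng(1)
years = np.repeat(np.arange(2008, 2025), 20)
amounts = np.where(rng.random(years.size) < 0.8,
                   rng.exponential(5e4, years.size),   # moderate-fine component
                   rng.exponential(2e6, years.size))   # extreme-fine component

# Block maxima per year, then a GEV fit (SciPy's shape c corresponds to -xi).
maxima = np.array([amounts[years == y].max() for y in np.unique(years)])
c, loc, scale = stats.genextreme.fit(maxima)

# Illustrative summary: the penalty level exceeded on average once per decade.
rl10 = stats.genextreme.isf(1 / 10, c, loc=loc, scale=scale)
print(f"GEV fit: c = {c:.3f}, loc = {loc:,.0f}, scale = {scale:,.0f}")
print(f"10-year return level ≈ {rl10:,.0f} USD")
```

Fitting the bulk of the fines would instead call for a mixture model (hyperexponential or hyper-log-logistic, as in the abstract); the GEV step applies only to the yearly maxima.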