Initialize infrastructure maintenance scripts with Ansible playbooks
Add Ansible-based maintenance scripts for infrastructure operations:

- CVE scanner using the NIST NVD database
- Package update checker with OpenAI risk assessment
- Docker cleanup playbook
- Log archiver for rotated logs
- Disk space analyzer

Supports Ubuntu 20.04/22.04/24.04, Debian 11/12/13, and Alpine Linux.
commit 3574b47a5f
.gitignore (vendored, new file, 13 lines)
@@ -0,0 +1,13 @@
*.retry
/tmp/
*.pyc
__pycache__/
.ansible/
*.log
*.swp
*.swo
*~
.DS_Store
.env
secrets.yml
vault.yml
README.md (new file, 248 lines)
@@ -0,0 +1,248 @@
# Infrastructure Maintenance Scripts

Ansible-based maintenance scripts for infrastructure operations across multiple Linux distributions.

## Supported Operating Systems

- Ubuntu 20.04, 22.04, 24.04
- Debian 11, 12, 13
- Alpine Linux

## Prerequisites

- Ansible 2.15 or higher
- Python 3.8+ on target hosts
- SSH access to target hosts
- Sudo privileges on target hosts

## Installation

1. Clone the repository:
```bash
git clone git@git.puddi.ng:public-infra/maintenance-scripts.git
cd maintenance-scripts
```

2. Install required Ansible collections:
```bash
ansible-galaxy collection install -r requirements.yml
```

3. Configure inventory:
```bash
vim inventory/hosts.ini
```

## Available Playbooks

### 1. CVE Scanner - `playbooks/scan_cves.yml`

Identifies packages with CVE vulnerabilities using the NIST NVD database.

**Features:**
- Parses installed packages across supported OS distributions
- Queries the NIST NVD CVE database via its API
- Correlates vulnerabilities with installed packages
- Outputs a JSON report with findings

**Usage:**
```bash
ansible-playbook playbooks/scan_cves.yml -i inventory/hosts.ini
```

**Output:**
- JSON report saved to `/tmp/cve_report_*.json`
- Contains package name, version, CVE IDs, severity, and host information
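For reference, the NVD API that the scanner talks to can also be exercised directly from the control host; the query below is only an illustration (the `keywordSearch` value is an arbitrary example):

```bash
curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?keywordSearch=openssl&resultsPerPage=5" | jq '.vulnerabilities[].cve.id'
```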
### 2. Package Update Checker - `playbooks/check_updates.yml`

Checks for available package updates and assesses potential risks using OpenAI.

**Features:**
- Lists upgradable packages across supported distributions
- Uses the OpenAI API to identify potential breaking changes
- Separates safe updates from risky ones
- Provides a recommendation on whether to proceed

**Prerequisites:**
- Set the `OPENAI_API_KEY` environment variable

**Usage:**
```bash
export OPENAI_API_KEY="your-openai-api-key"
ansible-playbook playbooks/check_updates.yml -i inventory/hosts.ini
```

**Output:**
- JSON report saved to `/tmp/update_report_*.json`
- Lists safe and risky updates with a risk assessment
- Provides a boolean flag indicating whether it is safe to update automatically
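An abbreviated sketch of what a report might look like (package names, versions, and risk values are purely illustrative):

```json
{
  "hostname": "web1",
  "total_updatable_packages": 2,
  "safe_updates_count": 1,
  "risky_updates_count": 1,
  "safe_updates": [{"name": "curl", "current_version": "7.81.0-1ubuntu1.15", "new_version": "7.81.0-1ubuntu1.16", "risk": false}],
  "risky_updates": [{"name": "openssl", "current_version": "3.0.2-0ubuntu1.14", "new_version": "3.0.2-0ubuntu1.15", "risk": true}],
  "can_proceed_with_update": false
}
```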
### 3. Docker Cleanup - `playbooks/cleanup_docker.yml`

Cleans up Docker resources including images, containers, and build cache.

**Features:**
- Removes dangling images
- Removes stopped containers
- Cleans build cache
- Provides before/after disk usage comparison
- Optional volume cleanup (disabled by default)

**Usage:**
```bash
ansible-playbook playbooks/cleanup_docker.yml -i inventory/hosts.ini
```

**Output:**
- JSON report saved to `/tmp/docker_cleanup_report_*.json`
- Shows disk space reclaimed for each resource type
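Volume pruning is controlled by the playbook's `docker_prune_volumes` variable and can be enabled for a single run with an extra variable:

```bash
ansible-playbook playbooks/cleanup_docker.yml -i inventory/hosts.ini -e "docker_prune_volumes=true"
```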
### 4. Log Archiver - `playbooks/archive_logs.yml`

Archives rotated log files and transfers them to remote storage.

**Features:**
- Archives gzipped rotated logs from `/var/log`
- Organizes logs by hostname, IP, and date
- Transfers archives to remote storage location
- Cleans up original logs after successful transfer
- Generates metadata for each archive

**Prerequisites:**
- Set `REMOTE_STORAGE_PATH` environment variable (defaults to `/mnt/log-archive`)

**Usage:**
```bash
export REMOTE_STORAGE_PATH="/path/to/log-storage"
ansible-playbook playbooks/archive_logs.yml -i inventory/hosts.ini
```

**Output:**
- JSON report saved to `/tmp/log_archive_report_*.json`
- Archives stored with structure: `YEAR/MONTH/DAY/logs_HOSTNAME_IP_DATE.tar.gz`
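For example, a host named `web1` at 192.168.1.10 archived on 15 March 2025 would end up at a path like the following (hostname, IP, and date are illustrative):

```
/mnt/log-archive/2025/03/15/logs_web1_192-168-1-10_2025-03-15.tar.gz
```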
### 5. Disk Space Analyzer - `playbooks/analyze_disk_space.yml`

Analyzes disk usage and identifies directories consuming excessive space.

**Features:**
- Scans multiple paths with configurable depth (default 5)
- Identifies directories larger than threshold (default 1GB)
- Lists large files exceeding threshold
- Provides disk and inode usage statistics
- Alerts on high disk or inode usage

**Usage:**
```bash
ansible-playbook playbooks/analyze_disk_space.yml -i inventory/hosts.ini
```

**Output:**
- JSON report saved to `/tmp/disk_space_report_*.json`
- Lists large directories and files sorted by size
- Includes disk and inode usage alerts
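The scan paths, depth, and size threshold are ordinary playbook variables and can be overridden at run time, for example:

```bash
ansible-playbook playbooks/analyze_disk_space.yml -i inventory/hosts.ini \
  -e '{"scan_paths": ["/var", "/srv"], "size_threshold_gb": 2, "max_depth": 3}'
```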
## Configuration

### Environment Variables

- `OPENAI_API_KEY`: Required for package update risk assessment
- `REMOTE_STORAGE_PATH`: Path for log archive storage (default: `/mnt/log-archive`)
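Both variables can be exported in the shell (or set in a CI job) before invoking the playbooks; the values below are placeholders:

```bash
export OPENAI_API_KEY="sk-..."
export REMOTE_STORAGE_PATH="/mnt/log-archive"
```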
### Inventory Structure

The inventory file uses INI format with groups:

```ini
[webservers]
web1.example.com ansible_host=192.168.1.10

[dbservers]
db1.example.com ansible_host=192.168.1.20
```

### SSH Configuration

Configure SSH access in `ansible.cfg` or use an SSH config file:
```ini
[defaults]
host_key_checking = False
```
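One possible way to point Ansible at a dedicated remote user and SSH key in `ansible.cfg` (user name and key path are examples only):

```ini
[defaults]
remote_user = ansible
private_key_file = ~/.ssh/id_ed25519
```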
## Running Playbooks

### Target specific hosts:
```bash
ansible-playbook playbooks/scan_cves.yml -i inventory/hosts.ini -l web1.example.com
```

### Target groups:
```bash
ansible-playbook playbooks/scan_cves.yml -i inventory/hosts.ini -l webservers
```

### Run with extra variables:
```bash
ansible-playbook playbooks/analyze_disk_space.yml -i inventory/hosts.ini -e "size_threshold_gb=5"
```

### Limit concurrency:
```bash
ansible-playbook playbooks/scan_cves.yml -i inventory/hosts.ini -f 10
```

## Output Locations

All playbooks generate JSON reports in `/tmp/` with timestamps:
- CVE reports: `/tmp/cve_report_TIMESTAMP.json`
- Update reports: `/tmp/update_report_TIMESTAMP.json`
- Docker cleanup reports: `/tmp/docker_cleanup_report_TIMESTAMP.json`
- Log archive reports: `/tmp/log_archive_report_TIMESTAMP.json`
- Disk space reports: `/tmp/disk_space_report_TIMESTAMP.json`
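Because the reports are plain JSON they can be inspected on the target hosts with standard tooling, for example with `jq` (the CVE scanner playbook installs `jq` on target hosts):

```bash
jq '.summary' /tmp/disk_space_report_*.json
jq '.risky_updates[].name' /tmp/update_report_*.json
```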
## Best Practices

1. **Test on non-production hosts first**: Always test playbooks on a subset of hosts
2. **Monitor output**: Review reports before taking automated actions
3. **Schedule regular runs**: Use cron or Jenkins for periodic scans (see the example cron entry after this list)
4. **Backup before updates**: Ensure backups exist before running update playbooks
5. **Review risky updates**: Manually review packages marked as risky before updating
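A minimal cron entry for a weekly CVE scan might look like this; the checkout path, user, log file, and schedule are illustrative and should be adapted to your environment:

```bash
# /etc/cron.d/maintenance-scans - weekly CVE scan, Mondays at 03:00
0 3 * * 1 ansible cd /opt/maintenance-scripts && ansible-playbook playbooks/scan_cves.yml -i inventory/hosts.ini >> /var/log/cve_scan_cron.log 2>&1
```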
## Troubleshooting

### Connection issues
```bash
ansible all -i inventory/hosts.ini -m ping
```

### Privilege issues
Ensure the user has sudo privileges:
```bash
ansible-playbook playbooks/scan_cves.yml -i inventory/hosts.ini -u ansible_user --become
```

### Collection not found
Install required collections:
```bash
ansible-galaxy collection install -r requirements.yml
```

### Python module issues
Ensure Python 3 is available:
```bash
ansible all -i inventory/hosts.ini -m shell -a "python3 --version"
```
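On minimal Alpine hosts Python 3 may not be installed at all; it can be bootstrapped with the `raw` module before regular modules will work (the group name is an example from the sample inventory):

```bash
ansible dockerservers -i inventory/hosts.ini -m raw -a "apk add --no-cache python3"
```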
## Contributing

1. Follow Ansible best practices
2. Use Ansible modules instead of shell commands when possible
3. Ensure cross-platform compatibility
4. Write clear and descriptive task names
5. Add error handling where appropriate
6. Test on all supported OS distributions

## License

Copyright (c) 2026. All rights reserved.
ansible.cfg (new file, 25 lines)
@@ -0,0 +1,25 @@
[defaults]
inventory = inventory/hosts.ini
roles_path = roles
collections_path = collections
retry_files_enabled = False
host_key_checking = False
stdout_callback = yaml
bin_ansible_callbacks = True
display_skipped_hosts = False
timeout = 30
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts
fact_caching_timeout = 86400

[ssh_connection]
pipelining = True
control_path = /tmp/ansible-ssh-%%h-%%p-%%r
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False
inventory/hosts.ini (new file, 24 lines)
@@ -0,0 +1,24 @@
[all:vars]
ansible_python_interpreter=/usr/bin/python3

[webservers]
# webserver1.example.com ansible_host=192.168.1.10
# webserver2.example.com ansible_host=192.168.1.11

[dbservers]
# dbserver1.example.com ansible_host=192.168.1.20
# dbserver2.example.com ansible_host=192.168.1.21

[appservers]
# appserver1.example.com ansible_host=192.168.1.30
# appserver2.example.com ansible_host=192.168.1.31

[dockerservers]
# docker1.example.com ansible_host=192.168.1.40
# docker2.example.com ansible_host=192.168.1.41

[all:children]
webservers
dbservers
appservers
dockerservers
playbooks/analyze_disk_space.yml (new file, 182 lines)
@@ -0,0 +1,182 @@
---
- name: Analyze Disk Space and Identify Large Directories
  hosts: all
  gather_facts: true
  vars:
    scan_paths:
      - "/"
      - "/var"
      - "/home"
      - "/opt"
      - "/usr"
      - "/tmp"
    max_depth: 5
    size_threshold_gb: 1
    output_file: "/tmp/disk_space_report_{{ ansible_date_time.iso8601_basic_short }}.json"

  tasks:
    - name: Get overall disk usage
      shell: df -h
      register: df_output
      changed_when: false

    - name: Parse disk usage information
      set_fact:
        disk_usage: >-
          {{ df_output.stdout_lines[1:] |
             map('regex_replace', '^([^\s]+)\s+([^\s]+)\s+([^\s]+)\s+([^\s]+)\s+([^\s]+)\s+([^\s]+)$', '{"device": "\\1", "size": "\\2", "used": "\\3", "available": "\\4", "percent": "\\5", "mount": "\\6"}') |
             map('from_json') |
             list }}

    - name: Find directories exceeding size threshold
      find:
        paths: "{{ item }}"
        file_type: directory
        recurse: false
      register: dir_list
      loop: "{{ scan_paths }}"
      failed_when: false

    - name: Analyze directory sizes for top-level paths
      shell: >-
        du -h -d{{ max_depth }} {{ item }} 2>/dev/null | grep -E '^[0-9]+\.?[0-9]*G' | awk '{print $1 "\t" $2}' | sort -hr
      register: dir_sizes
      loop: "{{ scan_paths }}"
      changed_when: false
      failed_when: false

    - name: Parse directory size results
      set_fact:
        large_directories: >-
          {{ dir_sizes.results |
             selectattr('stdout', 'defined') |
             map(attribute='stdout_lines') |
             flatten |
             select('match', '^[0-9]+\.?[0-9]*G\t.+$') |
             map('regex_replace', '^([0-9]+\.?[0-9]*)G\t(.+)$', '{"size_human": "\\1G", "size_gb_num": \\1, "path": "\\2"}') |
             map('from_json') |
             selectattr('size_gb_num', '>=', size_threshold_gb | float) |
             list }}

    - name: Convert human-readable sizes to bytes
      set_fact:
        large_directories_parsed: >-
          {{ large_directories_parsed | default([]) +
             [item | combine({'size_bytes': (item.size_gb_num | float * 1024 * 1024 * 1024) | int})] }}
      loop: "{{ large_directories | default([]) }}"
      loop_control:
        loop_var: item

    - name: Find files larger than threshold
      find:
        paths: "{{ item }}"
        size: "{{ (size_threshold_gb * 1024 * 1024 * 1024) | int }}"
        recurse: true
      register: large_files
      loop: "{{ scan_paths }}"
      failed_when: false

    - name: Parse large file information
      set_fact:
        large_files_info: >-
          {{ large_files_info | default([]) +
             [{'path': item.path,
               'size_bytes': item.size | default(0),
               'size_human': item.size | default(0) | human_readable}] }}
      loop: "{{ large_files.results | selectattr('files', 'defined') | map(attribute='files') | flatten | list }}"
      loop_control:
        loop_var: item

    - name: Get inode usage
      shell: df -i
      register: df_inode_output
      changed_when: false

    - name: Parse inode usage information
      set_fact:
        inode_usage: >-
          {{ df_inode_output.stdout_lines[1:] |
             select('match', '^([^\s]+)\s+([^\s]+)\s+([^\s]+)\s+([^\s]+)\s+([0-9]+)%\s+([^\s]+)$') |
             map('regex_replace', '^([^\s]+)\s+([^\s]+)\s+([^\s]+)\s+([^\s]+)\s+([0-9]+)%\s+([^\s]+)$', '{"device": "\\1", "inodes_total": "\\2", "inodes_used": "\\3", "inodes_free": "\\4", "inodes_percent": "\\5%", "inodes_percent_num": \\5, "mount": "\\6"}') |
             map('from_json') |
             list }}

    - name: Generate disk space report
      copy:
        dest: "{{ output_file }}"
        content: >-
          {
            "hostname": "{{ ansible_hostname }}",
            "ip_address": "{{ ansible_default_ipv4.address }}",
            "os": "{{ ansible_distribution }} {{ ansible_distribution_version }}",
            "analysis_date": "{{ ansible_date_time.iso8601 }}",
            "disk_usage": {{ disk_usage | to_json }},
            "inode_usage": {{ inode_usage | to_json }},
            "scan_parameters": {
              "paths": {{ scan_paths | to_json }},
              "max_depth": {{ max_depth }},
              "size_threshold_gb": {{ size_threshold_gb }},
              "size_threshold_bytes": {{ (size_threshold_gb * 1024 * 1024 * 1024) | int }}
            },
            "large_directories": {
              "count": {{ large_directories_parsed | default([]) | length }},
              "threshold_gb": {{ size_threshold_gb }},
              "directories": {{ large_directories_parsed | default([]) | to_json }}
            },
            "large_files": {
              "count": {{ large_files_info | default([]) | length }},
              "threshold_gb": {{ size_threshold_gb }},
              "files": {{ large_files_info | default([]) | to_json }}
            },
            "summary": {
              "total_large_directories": {{ large_directories_parsed | default([]) | length }},
              "total_large_files": {{ large_files_info | default([]) | length }},
              "disk_alerts": {{ (disk_usage | selectattr('percent', 'search', '^[89][0-9]%|^100%$') | list | length > 0) | lower }},
              "inode_alerts": {{ (inode_usage | selectattr('inodes_percent_num', '>=', 90) | list | length > 0) | lower }}
            }
          }
        mode: '0600'

    - name: Display disk space summary
      debug:
        msg:
          - "Disk space analysis completed on {{ ansible_hostname }}"
          - "Large directories found: {{ large_directories_parsed | default([]) | length }}"
          - "Large files found: {{ large_files_info | default([]) | length }}"
          - "Disk usage alerts: {{ disk_usage | selectattr('percent', 'search', '^[89][0-9]%|^100%$') | list | length > 0 }}"
          - "Inode usage alerts: {{ inode_usage | selectattr('inodes_percent_num', '>=', 90) | list | length > 0 }}"
          - "Report saved to: {{ output_file }}"

    - name: Display top 5 largest directories
      debug:
        msg: "{{ item.size_human }}\t{{ item.path }}"
      loop: "{{ (large_directories_parsed | default([]) | sort(attribute='size_gb_num', reverse=true))[:5] }}"
      when: large_directories_parsed | default([]) | length > 0

    - name: Return disk space findings
      set_fact:
        disk_space_report:
          hostname: "{{ ansible_hostname }}"
          ip_address: "{{ ansible_default_ipv4.address }}"
          os: "{{ ansible_distribution }} {{ ansible_distribution_version }}"
          disk_usage: "{{ disk_usage }}"
          inode_usage: "{{ inode_usage }}"
          large_directories: "{{ large_directories_parsed | default([]) }}"
          large_files: "{{ large_files_info | default([]) }}"
          summary:
            total_large_directories: "{{ large_directories_parsed | default([]) | length }}"
            total_large_files: "{{ large_files_info | default([]) | length }}"
            disk_alerts: "{{ disk_usage | selectattr('percent', 'search', '^[89][0-9]%|^100%$') | list | length > 0 }}"
            inode_alerts: "{{ inode_usage | selectattr('inodes_percent_num', '>=', 90) | list | length > 0 }}"
          analysis_date: "{{ ansible_date_time.iso8601 }}"
          report_file: "{{ output_file }}"
playbooks/archive_logs.yml (new file, 176 lines)
@@ -0,0 +1,176 @@
---
- name: Archive and Send Rotated Logs
  hosts: all
  gather_facts: true
  vars:
    log_directory: "/var/log"
    archive_pattern: "*.gz"
    remote_storage_path: "{{ lookup('env', 'REMOTE_STORAGE_PATH') | default('/mnt/log-archive', true) }}"
    temp_archive_dir: "/tmp/log_archive_{{ ansible_date_time.iso8601_basic_short }}"
    local_temp_dir: "/tmp/received_logs_{{ ansible_date_time.iso8601_basic_short }}"
    retention_days: 30
    archive_filename: "logs_{{ ansible_hostname }}_{{ ansible_default_ipv4.address | replace('.', '-') }}_{{ ansible_date_time.date }}.tar.gz"
    output_file: "/tmp/log_archive_report_{{ ansible_date_time.iso8601_basic_short }}.json"

  tasks:
    - name: Create temporary local directory for logs
      file:
        path: "{{ local_temp_dir }}"
        state: directory
        mode: '0700'
      delegate_to: localhost
      run_once: true

    - name: Find rotated log files (gzipped)
      find:
        paths: "{{ log_directory }}"
        patterns: "{{ archive_pattern }}"
        recurse: true
      register: rotated_logs
      failed_when: false

    - name: Check if rotated logs exist
      fail:
        msg: "No rotated log files found matching {{ archive_pattern }} in {{ log_directory }}"
      when: rotated_logs.matched == 0

    - name: Display found log files
      debug:
        msg: "Found {{ rotated_logs.matched }} rotated log files to archive"

    - name: Create temporary archive directory
      file:
        path: "{{ temp_archive_dir }}"
        state: directory
        mode: '0700'

    - name: Organize logs in temporary directory with metadata
      shell: >-
        mkdir -p "{{ temp_archive_dir }}/{{ ansible_hostname }}/{{ ansible_date_time.date }}/{{ ansible_default_ipv4.address | replace('.', '-') }}/{{ item.path | dirname | replace(log_directory, '') }}" &&
        cp -p {{ item.path }} "{{ temp_archive_dir }}/{{ ansible_hostname }}/{{ ansible_date_time.date }}/{{ ansible_default_ipv4.address | replace('.', '-') }}/{{ item.path | dirname | replace(log_directory, '') }}/"
      loop: "{{ rotated_logs.files }}"
      loop_control:
        loop_var: item

    - name: Create metadata file for archive
      copy:
        dest: "{{ temp_archive_dir }}/metadata.json"
        content: >-
          {
            "hostname": "{{ ansible_hostname }}",
            "ip_address": "{{ ansible_default_ipv4.address }}",
            "fqdn": "{{ ansible_fqdn }}",
            "os": "{{ ansible_distribution }} {{ ansible_distribution_version }}",
            "kernel": "{{ ansible_kernel }}",
            "architecture": "{{ ansible_architecture }}",
            "collection_date": "{{ ansible_date_time.iso8601 }}",
            "log_files_count": {{ rotated_logs.matched }},
            "source_directory": "{{ log_directory }}",
            "archive_pattern": "{{ archive_pattern }}"
          }
        mode: '0644'

    - name: Create tar archive of organized logs
      archive:
        path: "{{ temp_archive_dir }}/*"
        dest: "/tmp/{{ archive_filename }}"
        format: gz
        mode: '0600'

    - name: Calculate archive size
      stat:
        path: "/tmp/{{ archive_filename }}"
      register: archive_stat

    - name: Create remote storage directory structure
      file:
        path: "{{ remote_storage_path }}/{{ ansible_date_time.year }}/{{ ansible_date_time.month }}/{{ ansible_date_time.day }}"
        state: directory
        mode: '0755'
      delegate_to: localhost
      run_once: true

    - name: Fetch archive to localhost
      fetch:
        src: "/tmp/{{ archive_filename }}"
        dest: "{{ local_temp_dir }}/{{ archive_filename }}"
        flat: true

    - name: Copy archive to remote storage location
      copy:
        src: "{{ local_temp_dir }}/{{ archive_filename }}"
        dest: "{{ remote_storage_path }}/{{ ansible_date_time.year }}/{{ ansible_date_time.month }}/{{ ansible_date_time.day }}/{{ archive_filename }}"
        mode: '0644'
      delegate_to: localhost

    - name: Verify archive was transferred successfully
      stat:
        path: "{{ remote_storage_path }}/{{ ansible_date_time.year }}/{{ ansible_date_time.month }}/{{ ansible_date_time.day }}/{{ archive_filename }}"
      register: remote_archive_stat
      delegate_to: localhost

    - name: Remove original rotated log files after successful transfer
      file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ rotated_logs.files }}"
      loop_control:
        loop_var: item
      when: remote_archive_stat.stat.exists

    - name: Clean up temporary directories
      file:
        path: "{{ item }}"
        state: absent
      loop:
        - "{{ temp_archive_dir }}"
        - "/tmp/{{ archive_filename }}"
      failed_when: false

    - name: Generate archive report
      copy:
        dest: "{{ output_file }}"
        content: >-
          {
            "hostname": "{{ ansible_hostname }}",
            "ip_address": "{{ ansible_default_ipv4.address }}",
            "os": "{{ ansible_distribution }} {{ ansible_distribution_version }}",
            "archive_date": "{{ ansible_date_time.iso8601 }}",
            "log_directory": "{{ log_directory }}",
            "archive_pattern": "{{ archive_pattern }}",
            "logs_archived": {{ rotated_logs.matched }},
            "archive_filename": "{{ archive_filename }}",
            "archive_size_bytes": {{ archive_stat.stat.size | default(0) }},
            "archive_size_human": "{{ archive_stat.stat.size | default(0) | human_readable }}",
            "remote_storage_path": "{{ remote_storage_path }}/{{ ansible_date_time.year }}/{{ ansible_date_time.month }}/{{ ansible_date_time.day }}/{{ archive_filename }}",
            "transfer_successful": {{ remote_archive_stat.stat.exists | default(false) | lower }},
            "original_logs_deleted": {{ remote_archive_stat.stat.exists | default(false) | lower }}
          }
        mode: '0600'

    - name: Display archive summary
      debug:
        msg:
          - "Log archive completed on {{ ansible_hostname }}"
          - "Files archived: {{ rotated_logs.matched }}"
          - "Archive size: {{ archive_stat.stat.size | default(0) | human_readable }}"
          - "Remote location: {{ remote_storage_path }}/{{ ansible_date_time.year }}/{{ ansible_date_time.month }}/{{ ansible_date_time.day }}/{{ archive_filename }}"
          - "Transfer successful: {{ remote_archive_stat.stat.exists | default(false) }}"
          - "Original logs deleted: {{ remote_archive_stat.stat.exists | default(false) }}"
          - "Report saved to: {{ output_file }}"

    - name: Return archive findings
      set_fact:
        log_archive_report:
          hostname: "{{ ansible_hostname }}"
          ip_address: "{{ ansible_default_ipv4.address }}"
          os: "{{ ansible_distribution }} {{ ansible_distribution_version }}"
          logs_archived: "{{ rotated_logs.matched }}"
          archive_filename: "{{ archive_filename }}"
          archive_size_bytes: "{{ archive_stat.stat.size | default(0) }}"
          remote_storage_path: "{{ remote_storage_path }}/{{ ansible_date_time.year }}/{{ ansible_date_time.month }}/{{ ansible_date_time.day }}/{{ archive_filename }}"
          transfer_successful: "{{ remote_archive_stat.stat.exists | default(false) }}"
          archive_date: "{{ ansible_date_time.iso8601 }}"
          report_file: "{{ output_file }}"
playbooks/check_updates.yml (new file, 199 lines)
@@ -0,0 +1,199 @@
---
- name: Check Package Updates with Risk Assessment
  hosts: all
  gather_facts: true
  vars:
    openai_api_key: "{{ lookup('env', 'OPENAI_API_KEY') }}"
    openai_api_url: "https://api.openai.com/v1/chat/completions"
    openai_model: "gpt-4o"
    output_file: "/tmp/update_report_{{ ansible_date_time.iso8601_basic_short }}.json"
    temp_update_file: "/tmp/available_updates.json"

  tasks:
    - name: Validate OpenAI API key is present
      fail:
        msg: "OPENAI_API_KEY environment variable is required"
      when: openai_api_key | length == 0

    - name: Detect OS family and set package manager
      set_fact:
        pkg_mgr: "{{ 'apt' if ansible_os_family == 'Debian' else 'apk' if ansible_os_family == 'Alpine' else 'unknown' }}"

    - name: Update package cache (Debian/Ubuntu)
      apt:
        update_cache: true
        cache_valid_time: 3600
      when: ansible_os_family == 'Debian'

    - name: Update package cache (Alpine)
      apk:
        update_cache: true
      when: ansible_os_family == 'Alpine'

    - name: List upgradable packages (Debian/Ubuntu)
      shell: apt list --upgradable 2>/dev/null | tail -n +2 | awk -F'/' '{print $1 "\t" $2}'
      register: upgradable_debian
      changed_when: false
      when: ansible_os_family == 'Debian'

    - name: List upgradable packages (Alpine)
      shell: apk version -l '<'
      register: upgradable_alpine
      changed_when: false
      when: ansible_os_family == 'Alpine'

    - name: Parse upgradable packages (Debian/Ubuntu)
      set_fact:
        upgradable_packages: >-
          {{ upgradable_debian.stdout.split('\n') | select('match', '^.+\t.+$') |
             map('regex_replace', '^(.+?)\\t(.+)$', '{\"name\": \"\\1\", \"new_version\": \"\\2\"}') |
             map('from_json') | list }}
      when: ansible_os_family == 'Debian'

    - name: Parse upgradable packages (Alpine)
      set_fact:
        upgradable_packages: >-
          {{ upgradable_alpine.stdout.split('\n') | select('match', '^.+\s+<\s+.+$') |
             map('regex_replace', '^(.+?)\\s+<\\s+(.+)$', '{\"name\": \"\\1\", \"new_version\": \"\\2\"}') |
             map('from_json') | list }}
      when: ansible_os_family == 'Alpine'

    - name: Get current versions of upgradable packages (Debian/Ubuntu)
      shell: dpkg-query -W -f='${Package}\t${Version}\n' {{ item.name }}
      register: current_versions_debian
      changed_when: false
      loop: "{{ upgradable_packages | default([]) }}"
      loop_control:
        loop_var: item
      when: ansible_os_family == 'Debian'

    - name: Get current versions of upgradable packages (Alpine)
      shell: apk info -vv | grep "{{ item.name }}-" | awk '{print $1}'
      register: current_versions_alpine
      changed_when: false
      loop: "{{ upgradable_packages | default([]) }}"
      loop_control:
        loop_var: item
      when: ansible_os_family == 'Alpine'

    - name: Build complete package update list (Debian/Ubuntu)
      set_fact:
        package_update_list: >-
          {{ package_update_list | default([]) +
             [item.0 | combine({'current_version': (item.1.stdout | default('')).split('\t') | last})] }}
      loop: "{{ upgradable_packages | default([]) | zip(current_versions_debian.results | default([])) | list }}"
      loop_control:
        loop_var: item
      when: ansible_os_family == 'Debian'

    - name: Build complete package update list (Alpine)
      set_fact:
        package_update_list: >-
          {{ package_update_list | default([]) +
             [item.0 | combine({'current_version': (item.1.stdout | default('')) | regex_replace('^(.+?)-([0-9].+)$', '\\2')})] }}
      loop: "{{ upgradable_packages | default([]) | zip(current_versions_alpine.results | default([])) | list }}"
      loop_control:
        loop_var: item
      when: ansible_os_family == 'Alpine'

    - name: Prepare package list for OpenAI analysis
      set_fact:
        packages_for_analysis: >-
          {{ package_update_list | default([]) | map('to_json') | join('\n') }}

    - name: Create OpenAI prompt for risk assessment
      set_fact:
        openai_prompt: >-
          Analyze the following package updates for potential breaking changes or disruptions.
          Identify which packages might cause issues based on version changes.
          Return a JSON object with a "packages" array; each entry must contain the package "name" and a boolean "risk" field (true if risky, false if safe).
          Packages:
          {{ packages_for_analysis }}

    - name: Send request to OpenAI for risk assessment
      uri:
        url: "{{ openai_api_url }}"
        method: POST
        headers:
          Authorization: "Bearer {{ openai_api_key }}"
          Content-Type: "application/json"
        body_format: json
        body:
          model: "{{ openai_model }}"
          messages:
            - role: system
              content: "You are a package update risk assessment assistant. Analyze package updates and identify potential breaking changes or disruptions. Return only valid JSON."
            - role: user
              content: "{{ openai_prompt }}"
          temperature: 0.3
          response_format: { "type": "json_object" }
      register: openai_response
      until: openai_response.status == 200
      retries: 3
      delay: 5
      failed_when: openai_response.status != 200

    - name: Parse OpenAI risk assessment
      set_fact:
        risk_assessment: "{{ (openai_response.json.choices[0].message.content | from_json).packages | default([]) }}"

    - name: Merge risk assessment with package list
      set_fact:
        packages_with_risk: >-
          {{ packages_with_risk | default([]) +
             [item | combine({'risk': risk_assessment | selectattr('name', 'equalto', item.name) | map(attribute='risk') | first | default(false)})] }}
      loop: "{{ package_update_list | default([]) }}"
      loop_control:
        loop_var: item

    - name: Separate safe and risky packages
      set_fact:
        safe_updates: "{{ packages_with_risk | default([]) | selectattr('risk', 'equalto', false) | list }}"
        risky_updates: "{{ packages_with_risk | default([]) | selectattr('risk', 'equalto', true) | list }}"

    - name: Generate update report
      copy:
        dest: "{{ output_file }}"
        content: >-
          {
            "hostname": "{{ ansible_hostname }}",
            "ip_address": "{{ ansible_default_ipv4.address }}",
            "os": "{{ ansible_distribution }} {{ ansible_distribution_version }}",
            "scan_date": "{{ ansible_date_time.iso8601 }}",
            "total_updatable_packages": {{ packages_with_risk | default([]) | length }},
            "safe_updates_count": {{ safe_updates | length }},
            "risky_updates_count": {{ risky_updates | length }},
            "safe_updates": {{ safe_updates | to_json }},
            "risky_updates": {{ risky_updates | to_json }},
            "can_proceed_with_update": {{ (risky_updates | length == 0) | lower }}
          }
        mode: '0600'

    - name: Display update summary
      debug:
        msg:
          - "Total upgradable packages: {{ packages_with_risk | default([]) | length }}"
          - "Safe updates: {{ safe_updates | length }}"
          - "Risky updates: {{ risky_updates | length }}"
          - "Can proceed with automatic update: {{ risky_updates | length == 0 }}"
          - "Report saved to: {{ output_file }}"

    - name: Return update findings
      set_fact:
        update_report:
          hostname: "{{ ansible_hostname }}"
          ip_address: "{{ ansible_default_ipv4.address }}"
          os: "{{ ansible_distribution }} {{ ansible_distribution_version }}"
          total_updatable_packages: "{{ packages_with_risk | default([]) | length }}"
          safe_updates: "{{ safe_updates }}"
          risky_updates: "{{ risky_updates }}"
          can_proceed_with_update: "{{ risky_updates | length == 0 }}"
          scan_date: "{{ ansible_date_time.iso8601 }}"
          report_file: "{{ output_file }}"
playbooks/cleanup_docker.yml (new file, 169 lines)
@@ -0,0 +1,169 @@
---
- name: Docker System Cleanup
  hosts: all
  gather_facts: true
  vars:
    docker_prune_dangling: true
    docker_prune_images: true
    docker_prune_containers: true
    docker_prune_volumes: false
    docker_prune_build_cache: true
    output_file: "/tmp/docker_cleanup_report_{{ ansible_date_time.iso8601_basic_short }}.json"

  tasks:
    - name: Check if Docker is installed
      command: docker --version
      register: docker_check
      changed_when: false
      failed_when: false

    - name: Fail if Docker is not installed
      fail:
        msg: "Docker is not installed on this host"
      when: docker_check.rc != 0

    - name: Get Docker system information before cleanup
      command: docker system df
      register: docker_df_before
      changed_when: false

    - name: Parse Docker disk usage before cleanup
      set_fact:
        docker_disk_before: >-
          {{
            docker_disk_before | default({}) | combine({
              'images_total': docker_df_before.stdout | regex_search('Images\\s+(\\d+)', '\\1') | first | default(0) | int,
              'containers_total': docker_df_before.stdout | regex_search('Containers\\s+(\\d+)', '\\1') | first | default(0) | int,
              'local_volumes_total': docker_df_before.stdout | regex_search('Local Volumes\\s+(\\d+)', '\\1') | first | default(0) | int,
              'build_cache_total': docker_df_before.stdout | regex_search('Build Cache\\s+(\\d+)', '\\1') | first | default(0) | int
            })
          }}

    - name: Remove dangling Docker images
      community.docker.docker_prune:
        images: true
        images_filters:
          dangling: true
      register: prune_dangling
      when: docker_prune_dangling
      failed_when: false

    - name: Remove unused Docker images
      community.docker.docker_prune:
        images: true
        images_filters:
          dangling: false
      register: prune_images
      when: docker_prune_images
      failed_when: false

    - name: Remove stopped Docker containers
      community.docker.docker_prune:
        containers: true
      register: prune_containers
      when: docker_prune_containers
      failed_when: false

    - name: Remove unused Docker volumes
      community.docker.docker_prune:
        volumes: true
      register: prune_volumes
      when: docker_prune_volumes
      failed_when: false

    - name: Remove Docker build cache
      community.docker.docker_prune:
        builder_cache: true
      register: prune_build_cache
      when: docker_prune_build_cache
      failed_when: false

    - name: Perform full Docker system prune
      community.docker.docker_prune:
        images: true
        containers: true
        networks: false
        volumes: false
        builder_cache: true
      register: system_prune
      failed_when: false

    - name: Get Docker system information after cleanup
      command: docker system df
      register: docker_df_after
      changed_when: false

    - name: Parse Docker disk usage after cleanup
      set_fact:
        docker_disk_after: >-
          {{
            docker_disk_after | default({}) | combine({
              'images_total': docker_df_after.stdout | regex_search('Images\\s+(\\d+)', '\\1') | first | default(0) | int,
              'containers_total': docker_df_after.stdout | regex_search('Containers\\s+(\\d+)', '\\1') | first | default(0) | int,
              'local_volumes_total': docker_df_after.stdout | regex_search('Local Volumes\\s+(\\d+)', '\\1') | first | default(0) | int,
              'build_cache_total': docker_df_after.stdout | regex_search('Build Cache\\s+(\\d+)', '\\1') | first | default(0) | int
            })
          }}

    - name: Calculate space reclaimed
      set_fact:
        space_reclaimed: >-
          {{
            {
              'images_reclaimed': docker_disk_before.images_total - docker_disk_after.images_total,
              'containers_reclaimed': docker_disk_before.containers_total - docker_disk_after.containers_total,
              'volumes_reclaimed': docker_disk_before.local_volumes_total - docker_disk_after.local_volumes_total,
              'build_cache_reclaimed': docker_disk_before.build_cache_total - docker_disk_after.build_cache_total
            }
          }}

    - name: Generate cleanup report
      copy:
        dest: "{{ output_file }}"
        content: >-
          {
            "hostname": "{{ ansible_hostname }}",
            "ip_address": "{{ ansible_default_ipv4.address }}",
            "os": "{{ ansible_distribution }} {{ ansible_distribution_version }}",
            "cleanup_date": "{{ ansible_date_time.iso8601 }}",
            "before_cleanup": {
              "images": {{ docker_disk_before.images_total | default(0) }},
              "containers": {{ docker_disk_before.containers_total | default(0) }},
              "volumes": {{ docker_disk_before.local_volumes_total | default(0) }},
              "build_cache": {{ docker_disk_before.build_cache_total | default(0) }}
            },
            "after_cleanup": {
              "images": {{ docker_disk_after.images_total | default(0) }},
              "containers": {{ docker_disk_after.containers_total | default(0) }},
              "volumes": {{ docker_disk_after.local_volumes_total | default(0) }},
              "build_cache": {{ docker_disk_after.build_cache_total | default(0) }}
            },
            "reclaimed": {
              "images": {{ space_reclaimed.images_reclaimed | default(0) }},
              "containers": {{ space_reclaimed.containers_reclaimed | default(0) }},
              "volumes": {{ space_reclaimed.volumes_reclaimed | default(0) }},
              "build_cache": {{ space_reclaimed.build_cache_reclaimed | default(0) }}
            }
          }
        mode: '0600'

    - name: Display cleanup summary
      debug:
        msg:
          - "Docker cleanup completed on {{ ansible_hostname }}"
          - "Images reclaimed: {{ space_reclaimed.images_reclaimed }}"
          - "Containers reclaimed: {{ space_reclaimed.containers_reclaimed }}"
          - "Build cache reclaimed: {{ space_reclaimed.build_cache_reclaimed }}"
          - "Report saved to: {{ output_file }}"

    - name: Return cleanup findings
      set_fact:
        docker_cleanup_report:
          hostname: "{{ ansible_hostname }}"
          ip_address: "{{ ansible_default_ipv4.address }}"
          os: "{{ ansible_distribution }} {{ ansible_distribution_version }}"
          before: "{{ docker_disk_before }}"
          after: "{{ docker_disk_after }}"
          reclaimed: "{{ space_reclaimed }}"
          cleanup_date: "{{ ansible_date_time.iso8601 }}"
          report_file: "{{ output_file }}"
playbooks/scan_cves.yml (new file, 110 lines)
@@ -0,0 +1,110 @@
---
- name: Identify Packages with CVE Vulnerabilities
  hosts: all
  gather_facts: true
  vars:
    cve_nvd_api_url: "https://services.nvd.nist.gov/rest/json/cves/2.0"
    output_file: "/tmp/cve_report_{{ ansible_date_time.iso8601_basic_short }}.json"
    results_per_page: 2000

  tasks:
    - name: Detect OS family and set package manager
      set_fact:
        pkg_mgr: "{{ 'apt' if ansible_os_family == 'Debian' else 'apk' if ansible_os_family == 'Alpine' else 'unknown' }}"

    - name: Ensure required packages are installed
      package:
        name:
          - curl
          - jq
        state: present

    - name: Get installed packages with versions (Debian/Ubuntu)
      command: dpkg-query -W -f='${Package}\t${Version}\n'
      register: installed_packages_debian
      changed_when: false
      when: ansible_os_family == 'Debian'

    - name: Get installed packages with versions (Alpine)
      command: apk info -vv
      register: installed_packages_alpine
      changed_when: false
      when: ansible_os_family == 'Alpine'

    - name: Parse package list into dictionary
      set_fact:
        package_dict: "{{ installed_packages_debian.stdout | default('') | split('\n') | select('match', '^.+\t.+$') | map('regex_replace', '^(.+?)\\t(.+)$', '{\"name\": \"\\1\", \"version\": \"\\2\"}') | map('from_json') | list }}"
      when: ansible_os_family == 'Debian'

    - name: Parse Alpine package list into dictionary
      set_fact:
        package_dict: "{{ installed_packages_alpine.stdout | default('') | split('\n') | select('match', '^.+-.+$') | map('regex_replace', '^(.+?)-([0-9].+)$', '{\"name\": \"\\1\", \"version\": \"\\2\"}') | map('from_json') | list }}"
      when: ansible_os_family == 'Alpine'

    - name: Query the NVD CVE database
      uri:
        url: "{{ cve_nvd_api_url }}?resultsPerPage={{ results_per_page }}"
        method: GET
        return_content: true
        validate_certs: false
        headers:
          User-Agent: "Ansible-CVE-Scanner/1.0"
      register: nvd_response
      failed_when: false
      until: nvd_response.status == 200
      retries: 3
      delay: 2

    - name: Extract CVE data from NVD response
      set_fact:
        cve_data: "{{ nvd_response.content | from_json | json_query('vulnerabilities[*]') }}"
      when: nvd_response.status == 200

    - name: Match CVEs with installed packages
      set_fact:
        cve_findings: >-
          {{ cve_findings | default([]) +
             [{
               'package': item.name,
               'version': item.version,
               'cves': cve_data | default([]) |
                       selectattr('cve.id', 'defined') |
                       map(attribute='cve') |
                       map('to_json') |
                       select('search', item.name | regex_escape) |
                       map('from_json') |
                       list,
               'hostname': ansible_hostname,
               'ip_address': ansible_default_ipv4.address,
               'os': ansible_distribution + ' ' + ansible_distribution_version,
               'scan_date': ansible_date_time.iso8601
             }]
          }}
      loop: "{{ package_dict | default([]) }}"
      loop_control:
        loop_var: item

    - name: Filter packages with CVEs
      set_fact:
        affected_packages: "{{ cve_findings | default([]) | selectattr('cves') | list }}"

    - name: Generate CVE report JSON
      copy:
        dest: "{{ output_file }}"
        content: "{{ affected_packages | to_json(indent=2) }}"
        mode: '0600'

    - name: Display CVE summary
      debug:
        msg: "Found {{ affected_packages | length }} packages with CVEs. Report saved to {{ output_file }}"

    - name: Return CVE findings
      set_fact:
        cve_report:
          hostname: "{{ ansible_hostname }}"
          ip_address: "{{ ansible_default_ipv4.address }}"
          os: "{{ ansible_distribution }} {{ ansible_distribution_version }}"
          total_packages: "{{ package_dict | default([]) | length }}"
          packages_with_cves: "{{ affected_packages | length }}"
          findings: "{{ affected_packages }}"
          scan_date: "{{ ansible_date_time.iso8601 }}"
          report_file: "{{ output_file }}"
requirements.yml (new file, 8 lines)
@@ -0,0 +1,8 @@
---
collections:
  - name: community.general
    version: ">=8.0.0"
  - name: community.docker
    version: ">=3.0.0"

roles: []