
Commit 8a59926

Copilot and chantreux authored
Add temporal resolution and interpolation method to directory structure, reorganize scripts (#3)
* Initial plan

* Add temporal_resolution and interpolation columns to directory structure

* Update README with comprehensive directory structure documentation

* Fix typo: montlhy_request -> monthly_request

* Add script to create folder structure from CSV files

* Simplify structure: move interpolated data to derived product_type

  Interpolated data is now stored as product_type='derived' with non-native interpolation methods (e.g., gr006). This eliminates the separate 'interpolated' product_type, since the interpolation method is already specified in the interpolation column.

  Changes:
  - CSV: Changed product_type from 'interpolated' to 'derived' for interpolated data
  - Scripts: Updated to check interpolation != 'native' instead of product_type == 'interpolated'
  - Catalogues: Removed 'interpolated' from type_data_list
  - Documentation: Updated README with new structure examples
  - Path structure now: derived/{dataset}/{temporal_resolution}/{gr006}/{variable}/

* CERRA data is hourly, not 3-hourly

* Add file_name for interpolation

* Refactor path loading into utility functions

  Created utility functions in utils.py and operations.py to centralize path construction logic:
  - build_output_path(): Construct paths from parameters
  - load_output_path_from_row(): Get output path from CSV row
  - load_input_path_from_row(): Get input path from CSV row
  - load_path_from_df(): Look up path from DataFrame

  Updated all scripts to use these utilities instead of manually constructing paths:
  - derived/reanalysis-era5-single-levels.py
  - interpolation/reanalysis-cerra-single-levels.py
  - derived/reanalysis-cerra-land_accumulation.py

  Benefits:
  - Centralized path logic reduces code duplication
  - Easier to maintain and modify the path structure
  - Consistent path handling across all scripts

* Consolidate path functions in utils.py only

  Removed path construction functions from operations.py and updated all scripts to import from utils.py instead:
  - Removed from operations.py: build_output_path, load_output_path_from_row, load_input_path_from_row, load_path_from_df
  - Kept in operations.py: sfcwind_from_u_v, resample_to_daily (data processing functions only)
  - Updated derived/reanalysis-era5-single-levels.py to import path functions from utils
  - Updated derived/reanalysis-cerra-land_accumulation.py to import path functions from utils
  - interpolation/reanalysis-cerra-single-levels.py already imports from utils

  Result: a single source of truth for path construction logic in utils.py

* CERRA-Land ends in 2020

* New paths update

* Update produce_catalog.py to use utils.py path functions

  Updated catalogues/produce_catalog.py to use the centralized path construction functions from utils.py:
  - Import build_output_path and load_output_path_from_row from utils
  - Updated create_auxiliar_df() to use load_output_path_from_row() for data_path
  - Updated create_auxiliar_df() to use build_output_path() for origin_path
  - Added temporal_resolution column to output DataFrame
  - For derived data, origin_path now correctly points to raw/native source
  - For raw data, origin_path is the same as data_path (unless from CDS)

  This ensures consistent path generation across all scripts using the new directory structure.

* Update catalogue production

* Reorganize script structure into scripts/ subdirectories

  Reorganized all Python scripts into a centralized scripts/ directory structure:
  - scripts/download/ - Dataset download scripts (moved from scripts/)
  - scripts/utilities/ - Utility functions and helpers (utils.py, create_folder_structure.py)
  - scripts/derived/ - Derived variable calculation scripts (moved from derived/)
  - scripts/interpolation/ - Interpolation scripts (moved from interpolation/)
  - scripts/notebooks/ - Jupyter notebooks (moved from notebooks/)
  - scripts/standardization/ - Standardization scripts (moved from standardization/)
  - scripts/catalogue/ - Catalogue generation scripts (moved from catalogues/)

  Changes:
  - Updated all imports: sys.path.append('../utilities')
  - Updated all path references: ../requests -> ../../requests
  - Updated catalogue paths to point to ../../catalogues/
  - Updated CSV files with new script paths
  - Updated launch_all_requests_scripts.sh and catalog_executor.sh
  - Removed empty old directories

* Update README

  Removed detailed path component descriptions and examples from the README.

* Reorganize README

* Update REPOSITORY_SUMMARY with recent changes

  Updated REPOSITORY_SUMMARY.md to reflect:
  - New directory structure with temporal_resolution and interpolation metadata
  - Reorganized scripts into scripts/ subdirectories (download, utilities, derived, interpolation, standardization, catalogue, notebooks)
  - Updated CSV columns (temporal_resolution, interpolation, interpolation_file)
  - Centralized path construction utilities in scripts/utilities/utils.py
  - Interpolated data now stored as derived with non-native interpolation methods
  - Updated usage examples with new script paths
  - Added create_folder_structure.py utility documentation

  Kept the document concise while covering all major architectural changes.

* Correct cerra-land download script path in request

* Add E-OBS

* Correct E-OBS download script

* Update E-OBS naming

* My local changes before merge

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: chantreux <133343833+chantreux@users.noreply.github.com>
Co-authored-by: Adrián Chantreux Fermoso <chantreuxa@predictia.es>
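The centralized path helper described in this commit message might look roughly like the following sketch. The signature is hypothetical; the actual `build_output_path()` in `scripts/utilities/utils.py` may differ in parameters and defaults.

```python
import os


def build_output_path(base_path, product_type, dataset,
                      temporal_resolution, interpolation, variable):
    """Construct the standard data directory path:

    {base_path}/{product_type}/{dataset}/{temporal_resolution}/{interpolation}/{variable}/
    """
    # The trailing empty component yields a trailing path separator.
    return os.path.join(base_path, product_type, dataset,
                        temporal_resolution, interpolation, variable, "")


# Example: interpolated CERRA data is stored under product_type='derived'
# with a non-native interpolation method such as 'gr006'.
# ("/lustre/data" is a placeholder base path.)
path = build_output_path("/lustre/data", "derived",
                         "reanalysis-cerra-single-levels",
                         "3hourly", "gr006", "t2m")
print(path)
# /lustre/data/derived/reanalysis-cerra-single-levels/3hourly/gr006/t2m/
```

Centralizing this construction in one function is what lets the download, derivation, interpolation, and catalogue scripts agree on the same layout.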
1 parent: 78ace9e · commit: 8a59926

40 files changed: +964 −646 lines changed

README.md

Lines changed: 46 additions & 10 deletions
@@ -2,21 +2,57 @@
 This repository contains scripts to download, preprocess, standardize, and consolidate the catalogues available in the CDS.
 ## Environment:
 The environment is the one used in the c3s-atlas user tools: https://github.com/ecmwf-projects/c3s-atlas/blob/main/environment.yml
-## Filename format:
-Format of the files is "{var}\_{dataset}\_{date}.nc"
-With date:
-- "{year}{month}" for big datasets like CERRA saved month by month (download is faster this way).
-- "{year}" for the other datasets that are saved year by year.
+
 ## Contents
 
 | Directory | Contents |
 | :-------- | :------- |
-| [requests](https://github.com/SantanderMetGroup/c3s-cds/tree/main/requests) | Contains one CSV file per CDS catalogue, listing the requested variables, the target save directory, and whether the variable is raw or requires post-processing to be standardized.
-| [scripts](https://github.com/SantanderMetGroup/c3s-cds/tree/main/scripts) | Python scripts to download data from the CDS.
+| [requests](https://github.com/SantanderMetGroup/c3s-cds/tree/main/requests) | Contains one CSV file per CDS catalogue, listing the requested variables, temporal resolution, interpolation method, the target save directory, and whether the variable is raw or requires post-processing to be standardized.
 | [provenance](https://github.com/SantanderMetGroup/c3s-cds/tree/main/provenance) | Contains one JSON file per catalogue describing the provenance and definitions of each variable.
-| [standardization](https://github.com/SantanderMetGroup/c3s-cds/tree/main/standardization) | Python recipes to standardize the variables.
-| [derived](https://github.com/SantanderMetGroup/c3s-cds/tree/main/derived) | Python recipes to calculate derived products from the variables.
-| [interpolation](https://github.com/SantanderMetGroup/c3s-cds/tree/main/interpolation) | Python recipes to interpolate the data using reference grids.
+| [scripts/download](https://github.com/SantanderMetGroup/c3s-cds/tree/main/scripts/download) | Python scripts to download data from the CDS.
+| [scripts/standardization](https://github.com/SantanderMetGroup/c3s-cds/tree/main/scripts/standardization) | Python recipes to standardize the variables.
+| [scripts/derived](https://github.com/SantanderMetGroup/c3s-cds/tree/main/scripts/derived) | Python recipes to calculate derived products from the variables.
+| [scripts/interpolation](https://github.com/SantanderMetGroup/c3s-cds/tree/main/scripts/interpolation) | Python recipes to interpolate data using reference grids.
+| [scripts/catalogue](https://github.com/SantanderMetGroup/c3s-cds/tree/main/scripts/catalogue) | Python recipes to produce the catalogues of downloaded data.
 | [catalogues](https://github.com/SantanderMetGroup/c3s-cds/tree/main/catalogues) | CSV catalogues of datasets consolidated in Lustre or GPFS. The catalogues are updated through a nightly CI job.
 
 
+## Downloaded Data Directory Structure
+
+The repository uses a structured directory path format to organize downloaded, derived, and interpolated data:
+
+```
+{base_path}/{product_type}/{dataset}/{temporal_resolution}/{interpolation}/{variable}/
+```
+
+**Examples:**
+
+1. **Raw ERA5 hourly wind components:**
+```
+/lustre/.../raw/reanalysis-era5-single-levels/hourly/native/u10/
+```
+
+**Note:** Interpolated data is stored under `derived` with the `interpolation` field indicating the target grid (e.g., `gr006`). This distinguishes it from calculated variables, which use `interpolation=native`.
+
+## Filename format:
+Format of the files is "{var}\_{dataset}\_{date}.nc"
+With date:
+- "{year}{month}" for big datasets like CERRA saved month by month (download is faster this way).
+- "{year}" for the other datasets, which are saved year by year.
+
+## Creating Directory Structure
+
+Before downloading data, you can create the complete folder structure without downloading or calculating any data:
+
+```bash
+# Preview what directories would be created (dry-run mode)
+python scripts/create_folder_structure.py --dry-run
+
+# Create all directories
+python scripts/create_folder_structure.py
+```
+
+The script reads all CSV files in the `requests/` directory and creates the directory structure according to the format:
+`{base_path}/{product_type}/{dataset}/{temporal_resolution}/{interpolation}/{variable}/`
+
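The filename rule documented in the README diff above can be made concrete with a small sketch. `make_filename` is a hypothetical helper (not part of the repository), and zero-padding the month to two digits is an assumption about the `{year}{month}` form.

```python
def make_filename(var, dataset, year, month=None):
    """Build a filename following "{var}_{dataset}_{date}.nc".

    date is "{year}{month}" for datasets saved month by month (e.g., CERRA),
    or "{year}" for datasets saved year by year.
    """
    # Assumption: months are zero-padded so names sort chronologically.
    date = f"{year}{month:02d}" if month is not None else str(year)
    return f"{var}_{dataset}_{date}.nc"


print(make_filename("u10", "reanalysis-era5-single-levels", 2023))
# u10_reanalysis-era5-single-levels_2023.nc
print(make_filename("t2m", "reanalysis-cerra-single-levels", 2023, month=7))
# t2m_reanalysis-cerra-single-levels_202307.nc
```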

REPOSITORY_SUMMARY.md

Lines changed: 91 additions & 82 deletions
@@ -18,52 +18,52 @@ The repository automates the process of:
 - Which variables to download
 - Year ranges
 - CDS API parameters
-- Output paths
-- Interpolation reference grid (if needed)
+- Output paths and temporal resolution
+- Interpolation method (native, gr006, etc.)
 
 ### 2️⃣ **Download Phase (Raw Data)**
 ```
-requests/*.csv → scripts/*.py → CDS API → NetCDF files
+requests/*.csv → scripts/download/*.py → CDS API → NetCDF files
 ```
 - Scripts read CSVs
 - Create CDS API requests
 - Download raw data as NetCDF files
-- Files saved as: `{var}_{dataset}_{year}.nc`
+- Files saved to: `{base}/{product_type}/{dataset}/{temporal_resolution}/{interpolation}/{variable}/`
 
 ### 3️⃣ **Derivation Phase**
 ```
-Raw NetCDF → derived/*.py → Derived NetCDF
+Raw NetCDF → scripts/derived/*.py → Derived NetCDF
 ```
 - Scripts identify "derived" variables from CSVs
 - Load necessary raw data files
 - Perform calculations (e.g., wind speed from components)
 - Resample to daily values if needed
-- Save derived variables
+- Save derived variables with temporal resolution metadata
 
 ### 4️⃣ **Interpolation Phase**
 ```
-Raw/Derived NetCDF → interpolation/*.py → Interpolated NetCDF
+Raw NetCDF → scripts/interpolation/*.py → Interpolated NetCDF (stored as derived)
 ```
-- Scripts identify "interpolated" variables from CSVs
-- Load reference grid specified in the `interpolation` column
+- Scripts identify variables needing interpolation from CSVs
+- Load reference grid specified in the `interpolation_file` column
 - Apply conservative interpolation to regrid data
-- Save interpolated variables to specified output path
+- Save to derived directory with interpolation method (e.g., gr006)
 
 ### 5️⃣ **Standardization Phase**
 ```
-Derived/Raw NetCDF → standardization/*.py → Standardized NetCDF
+Derived/Raw NetCDF → scripts/standardization/*.py → Standardized NetCDF
 ```
 - Apply unit conversions
 - Update metadata attributes
 - Ensure CF convention compliance
 
-### 6️⃣ **Cataloguing raw downloaded data phase**
+### 6️⃣ **Cataloguing Phase**
 ```
-All NetCDF files → catalogues/produce_catalog.py → CSV + PDF reports
+All NetCDF files → scripts/catalogue/produce_catalog.py → CSV + PDF reports
 ```
-- Scan all raw output directories
+- Scan all output directories
 - Check file existence for each year
-- Generate availability reports
+- Generate availability reports with temporal resolution
 - Create visual heatmaps
 - Publish via GitHub Actions nightly
 
@@ -79,80 +79,81 @@ Contains CSV files that define **what data to download**
 - `filename_variable`: Variable name for saved files
 - `cds_request_variable`: Variable name in CDS API
 - `cds_years_start/end`: Year range to download
-- `product_type`: `raw`, `derived`, or `interpolated`
-- `interpolation`: Reference grid filename (e.g., `land_sea_mask_0.0625degree.nc4`)
-- `output_path`: Where to save the data
+- `product_type`: `raw` or `derived` (interpolated data is stored as derived)
+- `temporal_resolution`: hourly, daily, 3hourly, 6hourly, monthly
+- `interpolation`: native (non-interpolated) or grid specification (e.g., gr006)
+- `interpolation_file`: Reference grid file for interpolation (if needed)
+- `output_path`: Base directory for saving data
 - `script`: Which Python script handles this dataset
 
 **Example:** A row specifying to download u10 (10m wind u-component) for years 2022-2024 from ERA5.
 
 ---
 
-### ⬇️ **scripts/**
+### 📂 **scripts/**
 
-Python scripts that **download data from CDS**
+Organized directory containing all Python scripts:
+
+#### **scripts/download/**
+Scripts that **download data from CDS**
 - One script per CDS catalogue (e.g., `reanalysis-era5-single-levels.py`)
-- Uses `utils.py` which provides:
-  - `download_single_file()`: Downloads individual files via CDS API
-  - `download_files()`: Orchestrates parallel downloads based on CSV configuration
 - Reads request CSVs and creates API requests
-- Saves files with format: `{variable}_{dataset}_{year}.nc`
-
-**Workflow:**
-1. Read CSV from `requests/` directory
-2. For each variable marked as "raw", create CDS API requests
-3. Download files to specified output path
-4. Skip files that already exist
-
----
-
-### 🔬 **derived/**
+- Downloads files to directory structure: `{base}/{product_type}/{dataset}/{temporal_resolution}/{interpolation}/{variable}/`
+- Skips files that already exist
+
+#### **scripts/utilities/**
+Centralized utility functions
+- `utils.py`: Core functions for path construction and file downloads
+  - `build_output_path()`: Constructs directory paths with temporal resolution and interpolation
+  - `load_output_path_from_row()`: Extracts output path from CSV row
+  - `load_input_path_from_row()`: Extracts input path from CSV row
+  - `load_path_from_df()`: Looks up the path for a variable in a DataFrame
+  - `download_files()`: Orchestrates parallel downloads based on CSV configuration
+- `create_folder_structure.py`: Creates the complete directory structure from CSVs without downloading
 
-Python scripts that **calculate derived variables** from raw data
+#### **scripts/derived/**
+Scripts that **calculate derived variables** from raw data
 - Example: `reanalysis-era5-single-levels.py` calculates:
   - `sfcwind` (wind speed) from `u10` and `v10` components using: `sfcwind = √(u10² + v10²)`
 - Uses `operations.py` which provides utility functions:
   - `sfcwind_from_u_v()`: Calculate wind speed from components
   - `resample_to_daily()`: Aggregate hourly data to daily statistics
-  - `load_path_from_df()`: Load file paths from configuration
 
 **Workflow:**
 1. Read CSV to identify variables marked as "derived"
 2. Load required raw data files
 3. Apply mathematical operations
 4. Resample to daily values if needed
-5. Save derived variables
+5. Save derived variables with new temporal resolution
 
---
-
-### 🌐 **interpolation/**
-
-Python scripts that **interpolate datasets to reference grids**
+#### **scripts/interpolation/**
+Scripts that **interpolate datasets to reference grids**
 - Example: `reanalysis-cerra-single-levels.py` interpolates CERRA data
-- Reference grid is specified in the `interpolation` column of request CSVs
+- Reference grid specified in the `interpolation_file` column of request CSVs
 - Uses conservative interpolation method via xESMF
-- CERRA is the reference example for future dataset updates
-- Reference grids will be moved to a `resources/` folder in future updates
+- Saves to derived directory with interpolation method identifier (e.g., gr006)
 
 **Workflow:**
-1. Read CSV to identify variables marked as "interpolated"
-2. Load reference grid from specified file (e.g., `land_sea_mask_0.0625degree.nc4`)
+1. Read CSV to identify variables needing interpolation (interpolation != 'native')
+2. Load reference grid from specified file
 3. Apply conservative_normed interpolation to regrid data
-4. Save interpolated variables to output path
-5. Skip files that already exist
+4. Save to: `{base}/derived/{dataset}/{temporal_resolution}/{interpolation}/{variable}/`
 
---
-
-### 📏 **standardization/**
-
-Python scripts that **standardize variables** to CF conventions
+#### **scripts/standardization/**
+Scripts that **standardize variables** to CF conventions
 - Example: `derived-era5-single-levels-daily-statistics.py` contains functions like:
   - `tp()`: Convert precipitation from m/day to kg/m²/s (flux)
   - `e()`: Convert evaporation with proper units and attributes
   - `ssrd()`: Convert solar radiation from J/m² to W/m²
-  - Updates variable attributes (units, standard_name, long_name, etc.)
 
-**Purpose:** Ensure data complies with Climate and Forecast (CF) metadata conventions and CMIP6 standards for interoperability.
+#### **scripts/catalogue/**
+Scripts that **generate visual catalogues** of available data
+- `produce_catalog.py`: Scans directories, creates CSV catalogues showing data availability, generates heatmap visualizations
+- `generate_resumen.py`: Creates summary reports
+- Output saved to `catalogues/catalogues/` and `catalogues/images/`
+
+#### **scripts/notebooks/**
+Jupyter notebooks for **exploration and testing**
 
 ---
 
@@ -181,28 +182,31 @@ JSON files documenting **metadata and provenance** for each variable
 
 ### 📊 **catalogues/**
 
-Scripts that **generate visual catalogues** of available data
-- `produce_catalog.py` / `produce_catalog_v2.py`:
-  - Scans output directories for downloaded files
-  - Checks which years exist for each variable
-  - Creates CSV catalogues showing data availability
-  - Generates heatmap visualizations (PDF images)
-- `generate_resumen.py`: Creates summary reports
+Output directory for **catalogues and visualizations**
+- `catalogues/`: CSV files listing all variables, datasets, date ranges, file paths
+- `images/`: PDF heatmaps showing data availability (green=downloaded, orange=partial, red=missing)
 - Updated nightly via GitHub Actions CI/CD
 
-**Output:**
-- CSV files: Lists all variables, datasets, date ranges, file paths
-- PDF heatmaps: Visual representation of data availability (green=downloaded, orange=partial, red=missing)
-
 ---
 
-### 📓 **notebooks/**
+## Technical Details
 
-Jupyter notebooks for **exploration and testing**
+### Directory Structure
+**Enhanced structure with temporal resolution and interpolation metadata:**
 
---
+```
+{base}/{product_type}/{dataset}/{temporal_resolution}/{interpolation}/{variable}/
+```
 
-## Technical Details
+Where:
+- `product_type`: `raw` or `derived` (interpolated data stored as derived)
+- `temporal_resolution`: hourly, daily, 3hourly, 6hourly, monthly
+- `interpolation`: native (non-interpolated) or grid specification (e.g., gr006)
+
+**Examples:**
+- Raw hourly ERA5: `/lustre/.../raw/reanalysis-era5-single-levels/hourly/native/u10/`
+- Derived daily wind: `/lustre/.../derived/reanalysis-era5-single-levels/daily/native/sfcwind/`
+- Interpolated CERRA: `/lustre/.../derived/reanalysis-cerra-single-levels/3hourly/gr006/t2m/`
 
 ### File Naming Convention
 **Format:** `{variable}_{dataset}_{date}.nc`
@@ -220,7 +224,7 @@ Jupyter notebooks for **exploration and testing**
 - `catalog_executor.yml`: Runs nightly to update catalogues
 - `run_all_requests_scripts.yml`: Can trigger download scripts
 - **SLURM scripts:**
-  - `launch_all_requests_scripts.sh`: Batch job launcher for HPC environments
+  - `scripts/download/launch_all_requests_scripts.sh`: Batch job launcher for HPC environments
 - Designed for cluster computing with job scheduling
 
 ## Supported Datasets
@@ -234,24 +238,29 @@ Jupyter notebooks for **exploration and testing**
 
 ### To download ERA5 data:
 1. Edit `requests/reanalysis-era5-single-levels.csv` to specify years and variables
-2. Run: `python scripts/reanalysis-era5-single-levels.py`
-3. Raw data downloads to specified `output_path`
+2. Run: `python scripts/download/reanalysis-era5-single-levels.py`
+3. Raw data downloads to: `{base}/raw/{dataset}/{temporal_resolution}/native/{variable}/`
 
 ### To calculate derived variables:
 1. Ensure raw data is downloaded
-2. Run: `python derived/reanalysis-era5-single-levels.py`
-3. Derived variables saved to derived directory
+2. Run: `python scripts/derived/reanalysis-era5-single-levels.py`
+3. Derived variables saved to: `{base}/derived/{dataset}/{temporal_resolution}/native/{variable}/`
 
 ### To interpolate data:
 1. Ensure raw data is downloaded
-2. Specify reference grid in the `interpolation` column of request CSV
-3. Run: `python interpolation/reanalysis-cerra-single-levels.py`
-4. Interpolated data saved to specified output path
+2. Specify reference grid in the `interpolation_file` column of request CSV
+3. Run: `python scripts/interpolation/reanalysis-cerra-single-levels.py`
+4. Interpolated data saved to: `{base}/derived/{dataset}/{temporal_resolution}/{grid_spec}/{variable}/`
+
+### To create folder structure:
+1. Run: `python scripts/utilities/create_folder_structure.py --dry-run` (preview)
+2. Run: `python scripts/utilities/create_folder_structure.py` (create)
+3. Creates all directories based on CSV configurations without downloading
 
 ### To update catalogues:
-1. Run: `python catalogues/produce_catalog.py`
-2. Generates CSV catalogues and PDF visualizations
-3. Shows data availability status
+1. Run: `python scripts/catalogue/produce_catalog.py`
+2. Generates CSV catalogues and PDF visualizations in `catalogues/`
+3. Shows data availability status with temporal resolution metadata
 
 ## Integration
 This repository is part of the **C3S Atlas** ecosystem and uses the same conda environment. It serves as the data acquisition and preprocessing layer, providing standardized climate data for downstream analysis tools.
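As a worked illustration of the derivation step summarized in REPOSITORY_SUMMARY.md, the wind-speed formula `sfcwind = √(u10² + v10²)` can be sketched with NumPy. The repository's `sfcwind_from_u_v()` in `operations.py` presumably operates on xarray objects; this standalone array version is an assumption.

```python
import numpy as np


def sfcwind_from_u_v(u10, v10):
    """Wind speed magnitude from 10 m wind components: sqrt(u10**2 + v10**2)."""
    # np.hypot is a numerically stable equivalent of sqrt(u**2 + v**2)
    return np.hypot(u10, v10)


u = np.array([3.0, 0.0, -4.0])
v = np.array([4.0, 0.0, 3.0])
print(sfcwind_from_u_v(u, v))  # [5. 0. 5.]
```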
