The following examples illustrate typical scenarios of manipulating HDF files and demonstrate how much cleaner it is to solve these in HDFql (across different languages). Additionally, a quick start guide covering the most commonly used HDFql operations can be found here, and the complete reference manual is available here.
- Find all datasets in an HDF file named "data.h5" whose names start with "temperature" and whose datatype is float
- For each dataset found, print its name and read its data
- Write the data into a file named "output.txt" in ascending order
- Write each value (belonging to the data) on a new line, using a UNIX-style end-of-line (EOL) terminator
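The original HDFql statements for this scenario are not reproduced here; as a point of comparison, the following is a minimal sketch of the same steps using the lower-level h5py Python bindings (the choice of h5py, and the sample datasets created at the top so the script is self-contained, are assumptions beyond the original text):

```python
import h5py
import numpy as np

# Assumption: create a small "data.h5" so the example is runnable;
# in practice this file would already exist.
with h5py.File("data.h5", "w") as f:
    f.create_dataset("temperature_lab", data=np.array([3.5, 1.0, 2.25], dtype=np.float32))
    f.create_dataset("temperature_roof", data=np.array([9.0, 7.5], dtype=np.float32))
    f.create_dataset("humidity", data=np.array([0.4, 0.6], dtype=np.float32))

values = []
with h5py.File("data.h5", "r") as f:
    def visit(name, obj):
        # Keep datasets whose name starts with "temperature" and whose type is float
        if (isinstance(obj, h5py.Dataset)
                and name.split("/")[-1].startswith("temperature")
                and obj.dtype.kind == "f"):
            print(name, obj[...])          # print its name and read its data
            values.extend(obj[...].ravel().tolist())
    f.visititems(visit)

# Write the values in ascending order, one per line, with UNIX ("\n") EOL
with open("output.txt", "w", newline="\n") as out:
    for v in sorted(values):
        out.write(f"{v}\n")
```

The `newline="\n"` argument forces UNIX line endings even on Windows, matching the last step above.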
- Create an HDF file named "painters.h5"
- Inside the file, create a group named "picasso" that tracks the creation order of objects stored within it
- Inside the group, create a dataset named "guernica" of type integer of two dimensions with size 200x150
- The dataset is organized in chunks of 40x30 and uses a Fletcher32 checksum for error detection
- Write an incrementing value (starting at 0) into each position of the dataset
- Inside the dataset, create an attribute named "subject" of type variable-length UTF-8 char with the value "guerra civil española"
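Again, the HDFql statements themselves are not shown in this excerpt; a sketch of the equivalent steps with the h5py Python bindings may help for comparison (h5py is an assumption; the file, group, dataset, and attribute names come from the steps above):

```python
import h5py
import numpy as np

with h5py.File("painters.h5", "w") as f:
    # Group that tracks the creation order of the objects stored within it
    group = f.create_group("picasso", track_order=True)

    # 200x150 integer dataset, chunked 40x30, with Fletcher32 checksums
    dset = group.create_dataset(
        "guernica", shape=(200, 150), dtype="i4",
        chunks=(40, 30), fletcher32=True)

    # Incrementing values starting at 0, one per position
    dset[...] = np.arange(200 * 150, dtype="i4").reshape(200, 150)

    # h5py stores a Python str attribute as a variable-length UTF-8 string
    dset.attrs["subject"] = "guerra civil española"
```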
- Create an HDF file named "my_file.h5"
- Inside the file, create a dataset named "my_dataset" of type short of two dimensions with size UNLIMITEDx1024
- The dataset is extendible on its first dimension (to store an unknown volume of data) and compressed with ZLIB
- Acquire data from a process that returns an array of 1024 shorts on each reading
- Write the acquired data into the dataset using hyperslab functionality (so that already stored data is not overwritten)
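For comparison with the HDFql version, the steps above can be sketched with the h5py Python bindings, where the unlimited dimension is expressed via `maxshape` and each new reading is appended by resizing and writing a one-row hyperslab (h5py and the `acquire` stand-in function are assumptions; the file and dataset names come from the steps above):

```python
import h5py
import numpy as np

def acquire():
    # Hypothetical stand-in for the real acquisition process:
    # returns an array of 1024 shorts on each reading
    return np.random.randint(-100, 100, size=1024).astype(np.int16)

with h5py.File("my_file.h5", "w") as f:
    # First dimension unlimited (maxshape None), compressed with ZLIB ("gzip")
    dset = f.create_dataset(
        "my_dataset", shape=(0, 1024), maxshape=(None, 1024),
        dtype=np.int16, chunks=(1, 1024), compression="gzip")

    for _ in range(5):  # e.g. five acquisition cycles
        row = acquire()
        # Extend the first dimension by one, then write only the new row
        # (a hyperslab), leaving previously stored data untouched
        dset.resize(dset.shape[0] + 1, axis=0)
        dset[-1, :] = row
```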