Rene committed · Commit 36450ea · 1 Parent(s): d8fc7b0
Added visualizer and updated readme file
Files changed:
- README.md +71 -2
- Visualizer.ipynb +0 -0
- data.png +3 -0
- training.png +3 -0
README.md
CHANGED
@@ -8,7 +8,8 @@ This dataset contains the data for the first test case (1D compressible SPH) for

You can find the full paper [here](https://arxiv.org/abs/2403.16680).

- The source core repository is available [here](https://github.com/tum-pbs/SFBC/) and also contains information on the data generation
+ The source core repository is available [here](https://github.com/tum-pbs/SFBC/) and also contains information on the data generation. You can install our BasisConvolution framework simply by running
+ `pip install BasisConvolution`

For the other test case datasets look here:
@@ -22,4 +23,72 @@ For the other test case datasets look here:

## File Layout

The datasets are stored as hdf5 files with a single file per experiment. Within each file there is a set of configuration parameters, and each frame of the simulation is stored separately as a group. Each frame contains information for all fluid particles and all potentially relevant information. For the 2D test cases there is a pre-defined test/train split on a simulation level, whereas the 1D and 3D cases do not contain such a split.
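
As a minimal sketch of inspecting one of these files with h5py: the file name below is hypothetical, and whether the configuration lives in the file attributes or a separate group may differ, so treat this as a starting point rather than the authoritative layout.

```py
import h5py

# Open one experiment file (the filename here is hypothetical)
with h5py.File('dataset/train/experiment_00.hdf5', 'r') as f:
    # Configuration parameters (assumed here to live in the file attributes)
    for key, value in f.attrs.items():
        print(f'config: {key} = {value}')
    # Each simulation frame is stored as its own group holding per-particle data
    for frameName, frame in f.items():
        print(f'frame {frameName}: {list(frame.keys())}')
```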
## Demonstration

This repository contains a simple Jupyter notebook (Visualizer.ipynb) that loads the dataset from its current folder and first visualizes it:

![Dataset Visualization](data.png)

The notebook then runs a simple training on the data to learn the SPH summation-based density for different basis functions:

![Training Results](training.png)
## Minimum Working Example

Below you can find a fully working but simple example of loading our dataset, building a network (based on our SFBC framework) and doing a single network step. This relies on our SFBC/BasisConvolution framework, which you can find [here](https://github.com/tum-pbs/SFBC/) or simply install via pip (`pip install BasisConvolution`).
```py
from BasisConvolution.util.hyperparameters import parseHyperParameters, finalizeHyperParameters
from BasisConvolution.util.network import buildModel, runInference
from BasisConvolution.util.augment import loadAugmentedBatch
from BasisConvolution.util.arguments import parser
from BasisConvolution.util.dataloader import datasetLoader, processFolder
from torch.utils.data import DataLoader
import shlex
import torch

# Example arguments
args = parser.parse_args(shlex.split('--fluidFeatures constant:1 --boundaryFeatures constant:1 --groundTruth compute[rho]:constant:1/constant:rho0 --basisFunctions ffourier --basisTerms 4 --windowFunction "None" --maxUnroll 0 --frameDistance 0 --epochs 1'))
# Parse the arguments
hyperParameterDict = parseHyperParameters(args, None)
hyperParameterDict['device'] = 'cuda' if torch.cuda.is_available() else 'cpu' # make sure to use a GPU if you can
hyperParameterDict['iterations'] = 2**10 # Works well enough for this toy problem
hyperParameterDict['batchSize'] = 4 # Automatic batched loading is supported
hyperParameterDict['boundary'] = False # Make sure the data loader does not expect boundary data (this yields a warning if not set)

# Build the dataset
datasetPath = 'dataset/train'
train_ds = datasetLoader(processFolder(hyperParameterDict, datasetPath))
# And its respective loader/iterator combo as a batch sampler (this is our preferred method)
train_loader = DataLoader(train_ds, shuffle=True, batch_size=hyperParameterDict['batchSize']).batch_sampler
train_iter = iter(train_loader)
# Align the hyperparameters with the dataset, e.g., dimensionality
finalizeHyperParameters(hyperParameterDict, train_ds)
# Build a model for the given hyperparameters
model, optimizer, scheduler = buildModel(hyperParameterDict, verbose=False)

# Get a batch of data
try:
    bdata = next(train_iter)
except StopIteration:
    train_iter = iter(train_loader)
    bdata = next(train_iter)
# Load the data; the data loader does augmentation and neighbor searching automatically
configs, attributes, currentStates, priorStates, trajectoryStates = loadAugmentedBatch(bdata, train_ds, hyperParameterDict)
# Run the forward pass
optimizer.zero_grad()
predictions = runInference(currentStates, configs, model, verbose=False)
# Compute the loss against the stored ground-truth targets
gts = [traj[0]['fluid']['target'] for traj in trajectoryStates]
losses = [torch.nn.functional.mse_loss(prediction, gt) for prediction, gt in zip(predictions, gts)]
# Run the backward pass
loss = torch.stack(losses).mean()
loss.backward()
optimizer.step()
# Print the loss
print(loss.item())
print('Done')
```
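
To extend this single optimization step into an actual training run, you can wrap the batch sampling and update in a loop over `hyperParameterDict['iterations']`. A minimal sketch that reuses only the objects and calls from the example above; stepping the returned scheduler is omitted here since its configuration depends on the chosen hyperparameters:

```py
# Minimal training loop built from the single step above (sketch)
for it in range(hyperParameterDict['iterations']):
    try:
        bdata = next(train_iter)
    except StopIteration: # restart the iterator once the sampler is exhausted
        train_iter = iter(train_loader)
        bdata = next(train_iter)
    configs, attributes, currentStates, priorStates, trajectoryStates = loadAugmentedBatch(bdata, train_ds, hyperParameterDict)
    optimizer.zero_grad()
    predictions = runInference(currentStates, configs, model, verbose=False)
    gts = [traj[0]['fluid']['target'] for traj in trajectoryStates]
    loss = torch.stack([torch.nn.functional.mse_loss(p, gt) for p, gt in zip(predictions, gts)]).mean()
    loss.backward()
    optimizer.step()
    if it % 100 == 0:
        print(f'iteration {it}: loss = {loss.item():.6f}')
```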
Visualizer.ipynb
ADDED
The diff for this file is too large to render; see the raw diff.

data.png
ADDED
(stored via Git LFS)

training.png
ADDED
(stored via Git LFS)