[Q] Suggested methodology to generate simulated data from an existing dataset

I’m working on a project where I have a real dataset with a complex underlying structure. As part of this work, I am running a simulation study to test the validity of different methods from the literature for analyzing this kind of data. The goal is essentially to see how each method performs when the assumptions underlying a different method are the true ones.

To do this, I would like to generate simulated datasets which have the same underlying structure as the original dataset. I have thought of two possible approaches:

1. Add a small amount of random noise to the covariates on each simulation run, producing datasets that share the same general underlying structure but are not identical
2. Do some kind of bootstrap sampling to generate simulated datasets with the same underlying structure
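To make the two ideas concrete, here is a minimal sketch of what I have in mind, assuming the covariates form a numeric matrix `X` (the matrix and both function names are hypothetical stand-ins, not code from my project):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the real covariate matrix: 200 rows, 5 covariates.
X = rng.normal(size=(200, 5))

def jitter(X, scale=0.05, rng=rng):
    """Approach 1: add small Gaussian noise, scaled to each covariate's spread."""
    noise = rng.normal(scale=scale * X.std(axis=0), size=X.shape)
    return X + noise

def bootstrap_rows(X, rng=rng):
    """Approach 2: nonparametric bootstrap -- resample rows with replacement."""
    idx = rng.integers(0, X.shape[0], size=X.shape[0])
    return X[idx]

sim_jitter = jitter(X)        # same shape, slightly perturbed values
sim_boot = bootstrap_rows(X)  # same shape, rows drawn with replacement
```

Both preserve the dataset's dimensions, but they differ in what they preserve: jittering keeps every row (perturbed), while the bootstrap keeps the empirical joint distribution of the rows at the cost of duplicating some and dropping others.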

Does anyone have experience using either of these approaches, or can point me to papers outlining valid approaches for this kind of challenge?