Use CyberRealistic, epiCRealism, or Realistic Vision for better results.
Use simple prompts. Overly complex prompts can produce less realistic pictures because of CLIP bleeding; more tokens do not mean better results. Keep it simple.
Use ADetailer to enhance faces. Basically every solo portrait I made uses it. You can get my settings by clicking "copy generation data". I suggest keeping its denoising strength under 0.3 to avoid always getting the same face.
Use the BadDream and UnrealisticDream negative embeddings (e.g. `BadDream, (UnrealisticDream:1.2)`). Weight UnrealisticDream between 1.2 and 1.5. Don't use FastNegative or EasyNegative if you're aiming at realism; they're good for artworks, though.
Use Highres. fix with the following settings: Denoising strength 0.45, Hires steps 20, Hires upscaler 8x_NMKD-Superscale_150000_G, and as much upscale as your GPU can handle (mine only manages up to x1.8 at a 512x768 base resolution, but you can go higher). If you don't have 8x_NMKD-Superscale_150000_G, another GAN upscaler will probably do; it should be easy to find on Google. You can also try Latent with a denoising strength above 0.6, but the result will be harder to control.
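If you script your generations, the Highres. fix settings above map onto the AUTOMATIC1111 WebUI API. A minimal sketch, assuming a local WebUI launched with `--api` (the field names follow the `/sdapi/v1/txt2img` schema; the prompt text is just an example):

```python
# Sketch of a txt2img payload with Highres. fix enabled, mirroring the
# settings recommended above. Send it with:
#   requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
import json

payload = {
    "prompt": "photo of a woman, green eyes, freckles",   # example prompt
    "negative_prompt": "BadDream, (UnrealisticDream:1.2)",
    "width": 512,
    "height": 768,
    "enable_hr": True,                          # Highres. fix on
    "denoising_strength": 0.45,
    "hr_second_pass_steps": 20,                 # Hires steps
    "hr_upscaler": "8x_NMKD-Superscale_150000_G",
    "hr_scale": 1.8,                            # raise this if your GPU allows
}

print(json.dumps(payload, indent=2))
```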
Try to condition faces by prompting for eye color, hairstyle, hair color, ethnicity, and so on. Even celebrity names work. This model is pretty good at avoiding a single recurring face if you play with the context.
If the picture looks too clean, try adding some ISO noise. Even applied as post-processing with external tools, it will trick the brain enough to make you think "damn, this is a real photo."
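If you'd rather add the grain yourself, here is a minimal sketch using NumPy. The image is assumed to be an HxWx3 uint8 array (e.g. loaded with Pillow via `np.asarray(Image.open(path))`); the `sigma` value is my own assumption — tune it to taste, something around 3-8 usually stays subtle enough to read as film grain:

```python
import numpy as np

def add_iso_noise(img: np.ndarray, sigma: float = 5.0, seed=None) -> np.ndarray:
    """Add zero-mean Gaussian noise to an RGB uint8 image, clipped to [0, 255]."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, img.shape)          # per-pixel Gaussian grain
    noisy = img.astype(np.float32) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Example on a flat gray image: the output gains texture but stays valid uint8.
gray = np.full((64, 64, 3), 128, dtype=np.uint8)
noisy = add_iso_noise(gray, sigma=5.0, seed=0)
```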
If you feel the grounded nature of this model is limiting your imagination, try generating with DS6 and then doing img2img with this model to bump up the realism.
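That two-model workflow can also be scripted. A hedged sketch of the second step, assuming the realistic checkpoint is already selected in a WebUI running with `--api` (field names follow the `/sdapi/v1/img2img` schema; the base64 string and prompt here are placeholders):

```python
def build_img2img_payload(init_image_b64: str, denoise: float = 0.5) -> dict:
    """Payload for /sdapi/v1/img2img. A moderate denoising strength keeps the
    original composition while letting this checkpoint repaint the textures."""
    return {
        "init_images": [init_image_b64],    # base64-encoded PNG from the first model
        "prompt": "photo of ...",           # reuse/adapt the prompt from the first pass
        "negative_prompt": "BadDream, (UnrealisticDream:1.2)",
        "denoising_strength": denoise,      # 0.4-0.6 is a reasonable starting range
    }

payload = build_img2img_payload("<base64 image>", denoise=0.5)
```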