Conventional methods for real-time sound effects in 3D graphical and virtual environments have relied on preparing all the required samples ahead of time and simply replaying them as needed, or on parametrically modifying a basic set of samples using physically based techniques such as spring-damper simulation and modal analysis/synthesis. In this work, we propose applying the generative adversarial network (GAN) approach to this problem, so that a single trained generator can produce the required sounds quickly and with perceptually indistinguishable quality. With conventional methods, by contrast, separate approximate models would be needed to handle different material properties and contact types while maintaining real-time performance. We demonstrate our claim by training a GAN (specifically, WaveGAN) on the sounds of different drums and synthesizing sounds on the fly in a virtual drum-playing environment. A perceptual test revealed that subjects could neither distinguish the synthesized sounds from the ground truth nor perceive any noticeable delay relative to the corresponding physical event.
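To illustrate the synthesis pipeline described above, the following is a minimal, illustrative sketch of a WaveGAN-style generator: a latent noise vector is upsampled through stacked transposed-convolution-like layers into a fixed-length waveform. The published WaveGAN architecture maps a 100-dimensional latent vector to a 16384-sample clip; this toy version uses much smaller layer sizes, NumPy in place of a deep-learning framework, and random placeholder weights where a trained model would be loaded.

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample_layer(x, w, stride=4):
    """One upsampling layer: stride-4 transposed convolution + ReLU.

    x: (channels_in, length); w: (channels_out, channels_in, kernel).
    Each input sample contributes a weighted kernel to the output,
    so the output length is stride * input length.
    """
    c_out, c_in, k = w.shape
    out = np.zeros((c_out, x.shape[1] * stride))
    for t in range(x.shape[1]):
        seg = np.tensordot(w, x[:, t], axes=([1], [0]))  # (c_out, k)
        start = t * stride
        out[:, start:start + k] += seg[:, : out.shape[1] - start]
    return np.maximum(out, 0.0)  # ReLU nonlinearity

def generate(z):
    """Map a latent vector z (length 100, as in WaveGAN) to a waveform."""
    x = z.reshape(4, 25)  # project latent vector to (channels, length)
    for c_in, c_out in [(4, 8), (8, 4), (4, 1)]:
        # Random weights stand in for a trained generator's parameters.
        w = rng.normal(0.0, 0.1, (c_out, c_in, 8))
        x = upsample_layer(x, w)
    return np.tanh(x[0])  # squash samples into [-1, 1]

wave = generate(rng.normal(size=100))
print(wave.shape)  # each latent draw yields one fixed-length audio clip
```

Because a single forward pass like this is a fixed sequence of dense operations, a trained generator can synthesize each drum hit in one shot at interaction time, which is what makes the approach attractive for low-latency virtual environments.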