When a device has a fixed sampling rate or cannot be slaved to the studio Word Clock, it becomes necessary to use a Sample Rate Converter (SRC). This is a can of worms. Many devices on the market are based on first-generation ICs (such as the AD1890). Such technology causes roughly a 4-bit (24 dB) loss of signal-to-noise ratio: 24-bit audio becomes 20-bit quality, 20-bit audio becomes 16-bit quality, and so on. This loss is present even if the SRC is merely shifting the phase to lock the signal at the same sample rate. Current third-generation devices offer better performance, but there is always some loss of quality. It may be more acceptable, but it is never zero.
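The bit/decibel equivalence above follows from the fact that each bit of resolution is worth about 6.02 dB of dynamic range (20·log10(2)). A quick sketch of that arithmetic, with an illustrative helper name of my own choosing:

```python
import math

# Each bit of resolution contributes 20*log10(2) ~= 6.02 dB of SNR,
# so a 24 dB penalty costs roughly 4 bits of effective resolution.
DB_PER_BIT = 20 * math.log10(2)

def effective_bits(nominal_bits, snr_loss_db):
    """Effective bit depth remaining after an SNR penalty of snr_loss_db."""
    return nominal_bits - snr_loss_db / DB_PER_BIT

print(round(effective_bits(24, 24)))  # 24-bit source through a 24 dB loss -> ~20
print(round(effective_bits(20, 24)))  # 20-bit source through a 24 dB loss -> ~16
```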
It is a fallacy that 48kHz recordings are inherently better than 44.1kHz recordings. If the end result is going to be used on a CD, the minor difference will be lost in the sample rate conversion down to 44.1kHz.
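Part of why that conversion is costly: 48 kHz to 44.1 kHz is a non-integer ratio, so the converter must interpolate and decimate by the smallest integer factors of the two rates rather than simply dropping samples. A short check of what that ratio works out to:

```python
from math import gcd

# Reduce 44100/48000 to its smallest integer resampling ratio.
src_rate, dst_rate = 48000, 44100
g = gcd(src_rate, dst_rate)
up, down = dst_rate // g, src_rate // g

# The converter must interpolate by 147 and decimate by 160 --
# an awkward ratio that demands long, high-quality filters.
print(f"resample by {up}/{down}")  # -> resample by 147/160
```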
Software conversion may suffer the same quality loss, since it is based on the same mathematics. Unlike hardware converters, software rarely comes with true specifications or measurements.
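To make the point concrete, here is a deliberately naive software resampler using linear interpolation (my own illustrative sketch, not any product's algorithm): it performs the same interpolate-at-fractional-positions mathematics as hardware SRC, and its crude interpolation kernel is exactly where the quality loss comes from.

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive SRC: linearly interpolate the input at fractional positions.

    Real converters replace this two-point interpolation with long
    polyphase filters; the poorer the kernel, the greater the SNR loss.
    """
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate      # fractional position in the input
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Eight samples at 48 kHz become seven at 44.1 kHz.
print(len(resample_linear([0.0] * 8, 48000, 44100)))  # -> 7
```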