1.36 Consider an N-bit ADC whose analog input varies between 0 and V_FS (where the subscript FS denotes "full scale").
(a) Show that the least significant bit (LSB) corresponds to a change in the analog signal of V_FS/(2^N − 1). This is the resolution of the converter.
(b) Convince yourself that the maximum error in the conversion (called the quantization error) is half the resolution; that is, quantization error = V_FS/[2(2^N − 1)].
(c) For V_FS = 5 V, how many bits are required to obtain a resolution of 2 mV or better? What is the actual resolution obtained? What is the resulting quantization error?
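The part (c) search can be sketched numerically; this is an illustrative Python fragment (not part of the original problem) that applies the resolution formula V_FS/(2^N − 1) and halves it for the quantization error:

```python
# Sketch of part (c): find the smallest bit count N whose
# resolution V_FS / (2**N - 1) is 2 mV or better.
V_FS = 5.0        # full-scale voltage, volts
target = 2e-3     # required resolution: 2 mV

N = 1
while V_FS / (2**N - 1) > target:
    N += 1

resolution = V_FS / (2**N - 1)   # actual resolution, volts
q_error = resolution / 2         # quantization error, volts

print(N)           # bits required
print(resolution)  # actual resolution (about 1.22 mV for N = 12)
print(q_error)     # quantization error (half the resolution)
```

For these numbers the loop stops at N = 12, since 5 V / (2^11 − 1) ≈ 2.44 mV misses the target while 5 V / (2^12 − 1) ≈ 1.221 mV meets it.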