var a = new Uint32Array( [ 1 ] );
var str = toBinaryString( a[0] );
// returns '00000000000000000000000000000001'
var a = new Uint32Array( [ 4 ] );
var str = toBinaryString( a[0] );
// returns '00000000000000000000000000000100'
var a = new Uint32Array( [ 9 ] );
var str = toBinaryString( a[0] );
// returns '00000000000000000000000000001001'
Returns a string giving the literal bit representation of an unsigned 32-bit integer.
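The conversion can be sketched as reading each of the 32 bits from most significant to least significant. This is a minimal illustration, not the library's actual implementation; the helper name `toBinaryStringSketch` is hypothetical, and it relies on JavaScript's unsigned right shift (`>>>`) to treat the value as an unsigned 32-bit integer.

```javascript
// Hypothetical sketch of the conversion (not the library's implementation):
// walk the 32 bit positions from most significant (31) to least (0),
// appending '0' or '1' for each.
function toBinaryStringSketch( x ) {
    var b = '';
    var i;
    for ( i = 31; i >= 0; i-- ) {
        // `>>>` coerces the operand via ToUint32, so the value is
        // treated as an unsigned 32-bit integer:
        b += ( ( x >>> i ) & 1 ).toString();
    }
    return b;
}

var str = toBinaryStringSketch( 9 );
// returns '00000000000000000000000000001001'
```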
Notes
number values in JavaScript correspond to double-precision floating-point numbers. While this function is intended for unsigned 32-bit integers, it will accept floating-point values and represent them as if they were unsigned 32-bit integers. Accordingly, care should be taken to ensure that only nonnegative integer values less than 4,294,967,296 (2^32) are provided.
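To see why this caveat matters, JavaScript's own ToUint32 coercion (reachable via `x >>> 0`) illustrates how an out-of-range or fractional value maps onto an unsigned 32-bit integer: the fractional part is dropped and the result wraps modulo 2^32. Whether the library function applies exactly these semantics is an assumption here; this snippet only demonstrates the coercion itself.

```javascript
// ToUint32 truncates the fractional part:
var bits = ( 3.14 >>> 0 ).toString( 2 );
// returns '11'

// ...and wraps values >= 2^32 modulo 2^32:
bits = ( 4294967296 >>> 0 ).toString( 2 );
// returns '0'
```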