var a = new Uint16Array( [ 1 ] );
var str = toBinaryString( a[0] );
// returns '0000000000000001'
var a = new Uint16Array( [ 4 ] );
var str = toBinaryString( a[0] );
// returns '0000000000000100'
var a = new Uint16Array( [ 9 ] );
var str = toBinaryString( a[0] );
// returns '0000000000001001'
Returns a string giving the literal bit representation of an unsigned 16-bit integer.
Notes
JavaScript `number` values correspond to double-precision floating-point numbers. While this function is intended for unsigned 16-bit integers, the function will accept floating-point values and represent the values as if they are unsigned 16-bit integers. Accordingly, care should be taken to ensure that only nonnegative integer values less than `65536` (`2^16`) are provided.