0.9.9 API documentation
Integer functions

Provides GLSL functions on integer types. More...

Functions

template<typename genType >
GLM_FUNC_DECL int bitCount (genType v)
 Returns the number of bits set to 1 in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > bitCount (vec< L, T, Q > const &v)
 Returns the number of bits set to 1 in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldExtract (vec< L, T, Q > const &Value, int Offset, int Bits)
 Extracts bits [offset, offset + bits - 1] from value, returning them in the least significant bits of the result. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldInsert (vec< L, T, Q > const &Base, vec< L, T, Q > const &Insert, int Offset, int Bits)
 Returns the result of inserting the bits least-significant bits of insert into base. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldReverse (vec< L, T, Q > const &v)
 Returns the reversal of the bits of value. More...
 
template<typename genIUType >
GLM_FUNC_DECL int findLSB (genIUType x)
 Returns the bit number of the least significant bit set to 1 in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > findLSB (vec< L, T, Q > const &v)
 Returns the bit number of the least significant bit set to 1 in the binary representation of value. More...
 
template<typename genIUType >
GLM_FUNC_DECL int findMSB (genIUType x)
 Returns the bit number of the most significant bit in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > findMSB (vec< L, T, Q > const &v)
 Returns the bit number of the most significant bit in the binary representation of value. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL void imulExtended (vec< L, int, Q > const &x, vec< L, int, Q > const &y, vec< L, int, Q > &msb, vec< L, int, Q > &lsb)
 Multiplies 32-bit integers x and y, producing a 64-bit result. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, uint, Q > uaddCarry (vec< L, uint, Q > const &x, vec< L, uint, Q > const &y, vec< L, uint, Q > &carry)
 Adds 32-bit unsigned integers x and y, returning the sum modulo pow(2, 32). More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL void umulExtended (vec< L, uint, Q > const &x, vec< L, uint, Q > const &y, vec< L, uint, Q > &msb, vec< L, uint, Q > &lsb)
 Multiplies 32-bit unsigned integers x and y, producing a 64-bit result. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, uint, Q > usubBorrow (vec< L, uint, Q > const &x, vec< L, uint, Q > const &y, vec< L, uint, Q > &borrow)
 Subtracts the 32-bit unsigned integer y from x, returning the difference if non-negative, or pow(2, 32) plus the difference otherwise. More...
 

Detailed Description

Provides GLSL functions on integer types.

These all operate component-wise. The description is per component. The notation [a, b] means the set of bits from bit-number a through bit-number b, inclusive. The lowest-order bit is bit 0.

Include <glm/integer.hpp> to use these core features.

Function Documentation

GLM_FUNC_DECL int glm::bitCount ( genType  v)

Returns the number of bits set to 1 in the binary representation of value.

Template Parameters
genType: Signed or unsigned integer scalar type.
See also
GLSL bitCount man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL vec<L, int, Q> glm::bitCount ( vec< L, T, Q > const &  v)

Returns the number of bits set to 1 in the binary representation of value.

Template Parameters
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
T: Signed or unsigned integer scalar type.
See also
GLSL bitCount man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
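A minimal usage sketch, added for illustration; the results in the comments are assumptions based on 32-bit integer components, not text from the GLM reference:

    #include <glm/glm.hpp> // or just <glm/integer.hpp>, as noted above

    int main()
    {
        int s = glm::bitCount(7);                          // s == 3 (binary 111)
        glm::ivec2 c = glm::bitCount(glm::ivec2(7, 255));  // c == ivec2(3, 8)
    }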
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldExtract (vec<L, T, Q> const& Value, int Offset, int Bits)

Extracts bits [offset, offset + bits - 1] from value, returning them in the least significant bits of the result.

For unsigned data types, the most significant bits of the result will be set to zero. For signed data types, the most significant bits will be set to the value of bit offset + bits - 1.

If bits is zero, the result will be zero. The result will be undefined if offset or bits is negative, or if the sum of offset and bits is greater than the number of bits used to store the operand.

Template Parameters
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
T: Signed or unsigned integer scalar type.
See also
GLSL bitfieldExtract man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
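A brief sketch of how this might be used; the expected values in the comments assume 32-bit unsigned components and are illustrative, not normative:

    #include <glm/glm.hpp>

    int main()
    {
        glm::uvec2 v(0xF0u, 0x12345678u);
        // Extract 4 bits starting at bit offset 4 from each component.
        glm::uvec2 r = glm::bitfieldExtract(v, 4, 4); // r == uvec2(0xFu, 0x7u)
    }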
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldInsert (vec<L, T, Q> const& Base, vec<L, T, Q> const& Insert, int Offset, int Bits)

Returns the result of inserting the bits least-significant bits of insert into base.

The result will have bits [offset, offset + bits - 1] taken from bits [0, bits - 1] of insert, and all other bits taken directly from the corresponding bits of base. If bits is zero, the result will simply be base. The result will be undefined if offset or bits is negative, or if the sum of offset and bits is greater than the number of bits used to store the operand.

Template Parameters
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
T: Signed or unsigned integer scalar type.
See also
GLSL bitfieldInsert man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
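An illustrative sketch; the expected values are computed by hand for 32-bit unsigned components and are not taken from the specification:

    #include <glm/glm.hpp>

    int main()
    {
        glm::uvec2 base(0x00000000u, 0xFFFFFFFFu);
        glm::uvec2 insert(0x0000000Fu, 0x00000000u);
        // Place the 4 least-significant bits of 'insert' at bit offset 4 of 'base'.
        glm::uvec2 r = glm::bitfieldInsert(base, insert, 4, 4);
        // r == uvec2(0x000000F0u, 0xFFFFFF0Fu)
    }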
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldReverse ( vec< L, T, Q > const &  v)

Returns the reversal of the bits of value.

The bit numbered n of the result will be taken from bit (bits - 1) - n of value, where bits is the total number of bits used to represent value.

Template Parameters
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
T: Signed or unsigned integer scalar type.
See also
GLSL bitfieldReverse man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
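A short illustrative sketch, assuming 32-bit unsigned components:

    #include <glm/glm.hpp>

    int main()
    {
        glm::uvec2 v(1u, 0x80000000u);
        glm::uvec2 r = glm::bitfieldReverse(v); // r == uvec2(0x80000000u, 1u)
    }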
GLM_FUNC_DECL int glm::findLSB ( genIUType  x)

Returns the bit number of the least significant bit set to 1 in the binary representation of value.

If value is zero, -1 will be returned.

Template Parameters
genIUType: Signed or unsigned integer scalar type.
See also
GLSL findLSB man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL vec<L, int, Q> glm::findLSB ( vec< L, T, Q > const &  v)

Returns the bit number of the least significant bit set to 1 in the binary representation of value.

If value is zero, -1 will be returned.

Template Parameters
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
T: Signed or unsigned integer scalar type.
See also
GLSL findLSB man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
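An illustrative sketch covering both overloads; the results noted in the comments assume 32-bit integers:

    #include <glm/glm.hpp>

    int main()
    {
        int a = glm::findLSB(12);                       // 12 == binary 1100, so a == 2
        glm::ivec2 b = glm::findLSB(glm::ivec2(8, 0));  // b == ivec2(3, -1)
    }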
GLM_FUNC_DECL int glm::findMSB ( genIUType  x)

Returns the bit number of the most significant bit in the binary representation of value.

For positive integers, the result will be the bit number of the most significant bit set to 1. For negative integers, the result will be the bit number of the most significant bit set to 0. For a value of zero or negative one, -1 will be returned.

Template Parameters
genIUType: Signed or unsigned integer scalar type.
See also
GLSL findMSB man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL vec<L, int, Q> glm::findMSB ( vec< L, T, Q > const &  v)

Returns the bit number of the most significant bit in the binary representation of value.

For positive integers, the result will be the bit number of the most significant bit set to 1. For negative integers, the result will be the bit number of the most significant bit set to 0. For a value of zero or negative one, -1 will be returned.

Template Parameters
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
T: Signed or unsigned integer scalar type.
See also
GLSL findMSB man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
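An illustrative sketch covering both overloads; the results noted in the comments assume 32-bit integers:

    #include <glm/glm.hpp>

    int main()
    {
        int a = glm::findMSB(255);                       // a == 7
        glm::ivec2 b = glm::findMSB(glm::ivec2(0, -1));  // b == ivec2(-1, -1)
    }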
GLM_FUNC_DECL void glm::imulExtended (vec<L, int, Q> const& x, vec<L, int, Q> const& y, vec<L, int, Q>& msb, vec<L, int, Q>& lsb)

Multiplies 32-bit integers x and y, producing a 64-bit result.

The 32 least-significant bits are returned in lsb. The 32 most-significant bits are returned in msb.

Template Parameters
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
See also
GLSL imulExtended man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
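A minimal sketch showing how the 64-bit product is split across msb and lsb; the component values in the comments are hand-computed assumptions:

    #include <glm/glm.hpp>

    int main()
    {
        glm::ivec2 msb, lsb;
        glm::imulExtended(glm::ivec2(100000, -1), glm::ivec2(100000, 2), msb, lsb);
        // 100000 * 100000 == 10000000000: msb.x == 2, lsb.x == 0x540BE400
        // -1 * 2 == -2:                   msb.y == -1, lsb.y == -2
    }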
GLM_FUNC_DECL vec<L, uint, Q> glm::uaddCarry (vec<L, uint, Q> const& x, vec<L, uint, Q> const& y, vec<L, uint, Q>& carry)

Adds 32-bit unsigned integers x and y, returning the sum modulo pow(2, 32).

The value carry is set to 0 if the sum was less than pow(2, 32), or to 1 otherwise.

Template Parameters
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
See also
GLSL uaddCarry man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
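An illustrative sketch of wraparound addition with the carry output; the expected values assume 32-bit unsigned components:

    #include <glm/glm.hpp>

    int main()
    {
        glm::uvec2 carry;
        glm::uvec2 sum = glm::uaddCarry(glm::uvec2(0xFFFFFFFFu, 1u), glm::uvec2(1u, 2u), carry);
        // sum == uvec2(0u, 3u), carry == uvec2(1u, 0u)
    }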
GLM_FUNC_DECL void glm::umulExtended (vec<L, uint, Q> const& x, vec<L, uint, Q> const& y, vec<L, uint, Q>& msb, vec<L, uint, Q>& lsb)

Multiplies 32-bit unsigned integers x and y, producing a 64-bit result.

The 32 least-significant bits are returned in lsb. The 32 most-significant bits are returned in msb.

Template Parameters
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
See also
GLSL umulExtended man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
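A minimal sketch of the unsigned 64-bit product split across msb and lsb; the values in the comments are hand-computed expectations:

    #include <glm/glm.hpp>

    int main()
    {
        glm::uvec2 msb, lsb;
        glm::umulExtended(glm::uvec2(0xFFFFFFFFu, 2u), glm::uvec2(2u, 3u), msb, lsb);
        // 0xFFFFFFFF * 2 == 0x1FFFFFFFE: msb.x == 1u, lsb.x == 0xFFFFFFFEu
        // 2 * 3 == 6:                     msb.y == 0u, lsb.y == 6u
    }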
GLM_FUNC_DECL vec<L, uint, Q> glm::usubBorrow (vec<L, uint, Q> const& x, vec<L, uint, Q> const& y, vec<L, uint, Q>& borrow)

Subtracts the 32-bit unsigned integer y from x, returning the difference if non-negative, or pow(2, 32) plus the difference otherwise.

The value borrow is set to 0 if x >= y, or to 1 otherwise.

Template Parameters
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
See also
GLSL usubBorrow man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
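An illustrative sketch of subtraction with the borrow output; the expected values assume 32-bit unsigned components:

    #include <glm/glm.hpp>

    int main()
    {
        glm::uvec2 borrow;
        glm::uvec2 diff = glm::usubBorrow(glm::uvec2(5u, 7u), glm::uvec2(7u, 5u), borrow);
        // 5 - 7 wraps around: diff.x == 0xFFFFFFFEu, borrow.x == 1u
        // 7 - 5:              diff.y == 2u,          borrow.y == 0u
    }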