Mathematics for Enzyme Reaction Kinetics and Reactor Performance. F. Xavier Malcata
because a left‐hand‐sided vector system would result in this case. Upon insertion of Eqs. (3.136)–(3.142), it becomes possible to redo Eq. (3.135) to

u × v = (uyvz − uzvy) jx + (uzvx − uxvz) jy + (uxvy − uyvx) jz (3.143)

together with factoring out of jx, jy, and jz; the resulting vector, u × v, may appear in the alternative form
u × v = [jx jy jz] [uyvz − uzvy ; uzvx − uxvz ; uxvy − uyvx] (3.144)
resorting to matrix notation, or equivalently
u × v = | jx jy jz |
        | ux uy uz |
        | vx vy vz |   (3.145)
at the expense of the concept of determinant (both to be introduced later), combined with Eq. (1.9). One may instead write
u × v = Σi Σj Σk δijk uj vk ji (3.146)
– as alias of Eq. (3.143); similarly to Eq. (3.96), i stands for x (i = 1), y (i = 2), or z (i = 3), should the alternating operator, δijk, be defined by
δijk = 1 if (i, j, k) is an even permutation of (1, 2, 3); δijk = −1 if (i, j, k) is an odd permutation of (1, 2, 3); δijk = 0 otherwise, i.e. when any two indices coincide (3.147)
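As a quick numerical check (an illustration added here, not part of the book), the alternating‐operator sum of Eq. (3.146) can be evaluated directly and compared with the familiar component formula of Eq. (3.143); the function names below are hypothetical.

```python
# Sketch: cross product via the alternating operator, (u x v)_i = sum_j sum_k d_ijk u_j v_k,
# with indices 0, 1, 2 standing for x, y, z.
import itertools

def alternating(i, j, k):
    """Alternating operator: +1 for even permutations of (0, 1, 2),
    -1 for odd permutations, 0 when any index repeats."""
    if len({i, j, k}) < 3:
        return 0
    perm = (i, j, k)
    # count inversions to decide the parity of the permutation
    inversions = sum(1 for a, b in itertools.combinations(range(3), 2)
                     if perm[a] > perm[b])
    return 1 if inversions % 2 == 0 else -1

def cross(u, v):
    """Cross product computed through the triple sum of Eq. (3.146)."""
    return [sum(alternating(i, j, k) * u[j] * v[k]
                for j in range(3) for k in range(3))
            for i in range(3)]

print(cross([1, 2, 3], [4, 5, 6]))  # [-3, 6, -3], matching Eq. (3.143)
```

The triple sum collapses to only six nonzero terms, exactly the six products appearing in Eq. (3.143).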
Once in possession of Eqs. (3.19) and (3.143), one may revisit Eq. (3.128) as
(3.148)
where the distributive property of scalars allows transformation to
(3.149)
algebraic rearrangement at the expense of the commutative and associative properties of multiplication of scalars leads then to
(3.150)
so Eqs. (3.22) and (3.51) may be invoked to write
(3.151)
which retrieves Eq. (3.128) after applying Eq. (3.143) twice – thus confirming the validity of Eq. (3.128) through an independent derivation path.
Finally, it is worth mentioning that the volume, V, of a parallelepiped defined by vectors u, v, and w can be calculated as the area of the parallelogram that constitutes its base, defined by u and v and represented by vector (‖u‖‖v‖ sin{∠u,v}) n as per Eqs. (3.112) and (3.113), multiplied by its height – i.e. the projection of w upon n, calculated as ‖w‖ cos{∠w,n} as per Eq. (3.54). On the one hand, (‖u‖‖v‖ sin{∠u,v}) n is, by definition, equal to u × v as per Eq. (3.111) – so ‖u‖‖v‖ sin{∠u,v} coincides with ‖u × v‖ because ‖n‖ = 1; on the other hand, the length of the projection of w onto n reads ‖w‖ cos{∠w,n}, where cos{∠w,n} = cos{∠w, u × v} since u × v has the direction of n. The product of ‖u × v‖ by ‖w‖ cos{∠w, u × v} is but the scalar product of u × v and w as per Eq. (3.54) – so one finally concludes that
V = (u × v) · w (3.152)
with a scalar quantity being now at stake.
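The geometric argument above can be checked numerically; the short sketch below (an added illustration, with hypothetical function names) computes the scalar triple product of Eq. (3.152), taking its absolute value so that a volume is obtained regardless of the orientation of w relative to n.

```python
# Sketch: volume of the parallelepiped spanned by u, v, w as |(u x v) . w|.
def cross(u, v):
    """Cross product, componentwise as in Eq. (3.143)."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(a, b):
    """Scalar product of two 3-vectors."""
    return sum(x*y for x, y in zip(a, b))

def parallelepiped_volume(u, v, w):
    """V = |(u x v) . w|: base area times projected height, Eq. (3.152)."""
    return abs(dot(cross(u, v), w))

# unit cube: edges along the coordinate axes give V = 1
print(parallelepiped_volume((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # 1
# degenerate (coplanar) case: V = 0
print(parallelepiped_volume((1, 0, 0), (0, 1, 0), (1, 1, 0)))  # 0
```

A coplanar triple returns zero volume, consistent with w having no projection along n in that case.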
4 Matrix Operations
The matrix is a central concept in linear algebra; arrays of (real) numbers possess a long history associated with the solution of linear equations – and records indicate that the Italian mathematician Girolamo Cardano first brought a related method from China to Europe in 1545, using his book Ars Magna as vehicle. The first explicit mention of a matrix appeared in 1851, at the hands of James J. Sylvester, an English mathematician – although in the context of determinants. Since he was interested in the determinant formed from a rectangular array of numbers, and not in the array itself, he coined the word matrix from the Latin mater, meaning womb (i.e. the place from which something else originates); it fell to his collaborator Arthur Cayley to ascribe the modern sense to the concept of matrix.
Being an array of numbers, arranged as m rows × n columns and enclosed by square brackets, [ai,j] with i = 1, 2, …, m and j = 1, 2, …, n, a real matrix actually originates from Rm×n. It is termed rectangular when m ≠ n, and square when m = n; it reduces to a row vector when m = 1, or a column vector when n = 1. The main diagonal is formed by elements of the type ai,i; the matrix is said to be upper triangular if all entries below the main diagonal are zero, and lower triangular when all entries above the main diagonal are nil. A diagonal matrix is both upper and lower triangular, i.e. all elements off the main diagonal are zero; if all elements in the diagonal are, in turn, equal to each other, then a scalar matrix arises.
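The classifications above translate directly into simple membership tests; the sketch below (an added illustration, with hypothetical helper names) treats a square matrix as a list of rows and checks each definition in turn.

```python
# Sketch: classify a square matrix according to the definitions above.
def is_upper_triangular(a):
    """All entries below the main diagonal are zero."""
    n = len(a)
    return all(a[i][j] == 0 for i in range(n) for j in range(i))

def is_lower_triangular(a):
    """All entries above the main diagonal are zero."""
    n = len(a)
    return all(a[i][j] == 0 for i in range(n) for j in range(i + 1, n))

def is_diagonal(a):
    """Both upper and lower triangular."""
    return is_upper_triangular(a) and is_lower_triangular(a)

def is_scalar(a):
    """Diagonal, with all diagonal entries equal to each other."""
    return is_diagonal(a) and all(a[i][i] == a[0][0] for i in range(len(a)))

u = [[1, 2], [0, 3]]
d = [[5, 0], [0, 5]]
print(is_upper_triangular(u), is_diagonal(u))  # True False
print(is_scalar(d))  # True
```

Note that a scalar matrix passes all four tests, mirroring the nesting of the definitions: scalar implies diagonal, which implies both triangular forms.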