What is wrong with this proof? Let A be an n×n matrix with integer entries. Prove that A is nonsingular and A^-1 has integer entries if and only if det(A) = ±1.
Given: A is an n×n matrix with integer entries.

Proof: Since this is a biconditional statement, we need to prove two claims:
1. If A is nonsingular and A^-1 has integer entries, then det(A) = ±1.
2. If det(A) = ±1, then A is nonsingular and A^-1 has integer entries.
For the first claim, suppose A is nonsingular and A^-1 = adj(A)/det(A) has integer entries. Since A has integer entries, the cofactors of A are integers, so adj(A) has only integer entries. Therefore 1/det(A) multiplied by each entry of adj(A) must be an integer. *[Furthermore, if adj(A) has integer entries, then 1/det(A) must be an integer.]* As a result, det(A) = ±1.

For the second claim, if det(A) = ±1, then A is nonsingular and A^-1 = ±adj(A) has integer entries.

My instructor wrote that there's no reason to conclude the step in asterisks. What does that mean? Something is wrong in the first claim, but I can't figure out what's buggy about it.
It's quite possible for adj(A) to have integer entries while 1/det(A) is still a fraction. For example, what if every entry of adj(A) were even and det(A) were 2? Each entry of adj(A)/det(A) would still be an integer, so the inference "adj(A) has integer entries, therefore 1/det(A) is an integer" doesn't follow from what you wrote.

The way you want to tackle the first statement is by using the fact that\[\det (A^{-1})=\frac{1}{\det (A)}.\]If A has integer entries, then det(A) must be an integer. If A^-1 has integer entries, then det(A^-1) must also be an integer. What can you conclude from the above equation if both det(A) and det(A^-1) have to be integers?
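A quick numeric check of that equation, as a minimal sketch in plain Python. The 2×2 helpers `det2` and `inv2` are ad hoc names (not from the thread); `Fraction` keeps the arithmetic exact so we can see whether entries are integers:

```python
from fractions import Fraction

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    """Inverse of a 2x2 matrix via adj(m)/det(m), using exact fractions."""
    d = Fraction(det2(m))
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

# det(A) = 1, so A^-1 = adj(A)/det(A) has integer entries,
# and det(A^-1) = 1/det(A) = 1 is an integer.
A = [[3, 1], [2, 1]]
Ainv = inv2(A)

# det(B) = 2, so B^-1 has a non-integer entry,
# and det(B^-1) = 1/2 is not an integer.
B = [[2, 0], [0, 1]]
Binv = inv2(B)
```

Running this shows both integrality conditions failing together exactly when det is not ±1.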
We need an integer determinant: det(A_1j) is an integer for j = 1, ..., n, where each A_1j is the (n−1)×(n−1) matrix obtained by deleting row 1 and column j of A. Sums of integer multiples of integers are integers, so by cofactor expansion along the first row,\[\det(A)=\sum_{j=1}^{n}(-1)^{1+j}a_{1j}\det(A_{1j})\]is an integer. What about this @joemath314159
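That cofactor-expansion argument can be sanity-checked in plain Python. `idet` is a hypothetical helper (not from the thread) that expands along the first row exactly as above; since it only adds and multiplies integer entries, the result is necessarily an integer:

```python
def idet(m):
    """Determinant by cofactor expansion along row 0.

    For an integer matrix this uses only integer addition and
    multiplication, so the result is an int -- the point of the
    argument above.
    """
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        # (-1)**j here matches (-1)**(1 + j) in 1-indexed notation.
        total += (-1) ** j * m[0][j] * idet(minor)
    return total
```

For example, `idet([[3, 1], [2, 1]])` returns the integer 1.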