Tag Archives: Proofs

Complex Analysis Assignment 1

My first complex analysis assignment has been marked and returned. I don't think I've ever felt the urge quite so much to learn from my mistakes.

Consequently there has been quite a lot of post-assignment learning... :/

This assignment featured a very brief introduction to complex numbers as a refresher, then broadly covered complex functions, the concept of continuity and complex differentiation.

So in no particular order, below are some notes on mistakes I made and how I could've avoided them! There's a lot to reflect on here...


Read questions carefully. One of the first very simple questions read "express z in polar form and determine all fourth roots". I did the second bit, but not the first.


I feel this is a bit "Complex Numbers 101", but the square root sign is defined as the principal square root (of a complex number), i.e. there's no need to calculate the second root.


If you're using the triangle inequality, state it specifically.


Again, this is fairly "Complex Numbers 101", but the polar form of a complex number isn't just any cosine function as the real part and any sine function as the imaginary part. The arguments of both functions must be identical to qualify as "polar form", i.e. you should be able to write the complex number in the exponential form re^{i\theta}.


Top tip: Be mindful about using identities. In complex analysis there are loads of them and they help a great deal.


When working out the inverse of a complex function, it's important to use your common sense. Part of one inverse I'd calculated had a square root in it. Just by looking at that, you know it could never produce a unique answer (it isn't a one-to-one function).


For another, I had to find the inverse of \text{Log}(3z) and the domain of that inverse. I got this spectacularly wrong. I'd written: given w=\text{Log}(3z), hence z=e^{3w}.

The trick here was to exponentiate each side, leading to e^{w}=3z. But the domain of the inverse isn't affected by the "3" above; the image set of the original function is still \{w: -\pi <\text{Im}\:w \leq \pi\}.
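
For my own future reference, the corrected working in full:

\begin{align*} w &= \text{Log}(3z) \\ e^{w} &= 3z \\ z &= \frac{1}{3}e^{w} \end{align*}

so the inverse rule is f^{-1}(w)=\frac{1}{3}e^{w}, with domain \{w : -\pi < \text{Im}\:w \leq \pi\}.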


Some complex functions are very different from their real equivalents. Case in point: \text{cosh}(x)\neq 0 , \forall x \in \mathbb{R}, but \exists\: z \in \mathbb{C}\: \text{s.t.}\: \text{cosh}(z)=0. Which leads to the next note:


If \text{cosh}(z) is the denominator of a complex quotient, you need to show that its zeros all lie outside the given region (e.g. |z|<1) before you can say anything about the quotient there.
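
For the record, the zeros of \text{cosh} are a standard fact worth having to hand:

\text{cosh}(z)=0 \iff z=i\left(\frac{\pi}{2}+k\pi\right),\:k\in\mathbb{Z}

The zero of smallest modulus is i\frac{\pi}{2}, and \frac{\pi}{2}\approx 1.57>1, so every zero of \text{cosh} lies outside the disc |z|<1.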


For one question, I had to prove that f(z)=z^{i},\:\: (\text{Re}\:z>0) was continuous. I thought this was easy.

z^{\alpha},\: \alpha \in \mathbb{C} is a basic continuous function on \mathbb{C}-\{x\in\mathbb{R} : x \leq 0\}. So if you let \alpha=i, then f(z) is continuous, right?

Not quite. I had entirely forgotten to state that the given set (\text{Re}\:z>0) is a subset of the set I gave: \mathbb{C}-\{x\in\mathbb{R} : x \leq 0\}.

The answer can appear obvious sometimes, but you have to keep your answer rigorous, otherwise you risk losing half marks or whole marks here and there.


Note:
z^{\alpha} = e^{\alpha \text{Log}(z)}
z^{\alpha} \neq e^{z \text{Log}(\alpha)}
🙁
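
A quick sanity check with the correct definition (the classic example): i^{i}=e^{i\text{Log}(i)}=e^{i(i\pi/2)}=e^{-\pi/2}, which is, pleasingly, a real number (roughly 0.208).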


For one question I had to prove whether a set was a region or not. For reference, a region is a non-empty, connected, open subset of \mathbb{C}. In the usual manner, if you can prove that any of those three properties don't hold then you've managed to prove that your set isn't a region. Easy.

I realised I could prove a set was closed, and hence (I thought) not a region. Turns out this was incorrect. "Closed" and "not open" have two completely different definitions. A set can be both open and closed (\mathbb{C} itself, for example), or neither (a half-open strip, say). I was meant to show the set was "not open", as opposed to showing it was "closed".

In other words, mathematically:

Closed is not the same as not-open.
Closed is not the opposite of open.
Not-open is the opposite of open.


Again, here I needed to provide a proof based on the properties of various objects. Given a set that was compact (closed and bounded), I needed to prove that a function f was bounded on that set.

The Boundedness Theorem states that if a function is continuous on a compact set, then that function is bounded on that set.

The function was: f(z) = \frac{1}{7z^{7}-1}

I proved that the given function was continuous on its domain, but I'd failed to prove it was continuous on the set. Here, I needed to show where the function was undefined, THEN show that those points at which it was undefined all lay outside of the set. So there was quite a lot of work missing from this answer.
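
As a sanity check: the singularities of f are the zeros of 7z^{7}-1, i.e. the seventh roots of \frac{1}{7}, all of which sit on the circle |z|=7^{-1/7}\approx 0.757. A quick numerical confirmation in Python (the closed disc |z|\leq\frac{1}{2} below is purely hypothetical, standing in for the actual compact set from the question):

    import numpy as np

    # Singularities of f(z) = 1/(7z^7 - 1) are the zeros of 7z^7 - 1.
    # np.roots takes polynomial coefficients from highest degree down.
    roots = np.roots([7, 0, 0, 0, 0, 0, 0, -1])
    print(np.abs(roots))  # all on the circle |z| = 7**(-1/7), roughly 0.757

    # Hypothetical compact set: the closed disc |z| <= 1/2.
    # Every singularity lies strictly outside it, so f is continuous there.
    print(all(abs(r) > 0.5 for r in roots))  # True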


Needing the Cauchy-Riemann theorem AND the Cauchy-Riemann Converse theorem simultaneously within a proof ended up not flowing very well logically. Once again, I'd jumped ahead with my logic. As soon as I had seen something obvious, I felt the urge to state it immediately.

The Cauchy-Riemann theorem shows that a function is not differentiable at the points where the Cauchy-Riemann equations fail. The Converse theorem then shows that a function IS differentiable at points where the equations hold and the partial derivatives are continuous. After using the Cauchy-Riemann theorem, it was extremely obvious where the function was differentiable, so I stated it. Then, as a matter of course, plodded through the Converse theorem to prove it. Complete lack of discipline! 🙂
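
A textbook-standard illustration of the two theorems working together (not from the assignment): take f(z)=|z|^{2}, so u(x,y)=x^{2}+y^{2} and v(x,y)=0. The Cauchy-Riemann equations u_{x}=v_{y} and u_{y}=-v_{x} become 2x=0 and 2y=0, which fail everywhere except the origin, so f is not differentiable for z\neq 0. At the origin the equations hold and all four partial derivatives are continuous, so the Converse theorem gives differentiability at 0 (and only there).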

Complex Functions: Domains, Image Sets and Inverses

I can imagine having to refer to these notes regularly, so I'm putting them here!

Image Sets

  1. State the domain A of f(z)=w.
  2. Rearrange so z is a function of w (to discover the conditions under which w remains valid).

e.g., for f(z)=\frac{1}{z-1}:

    \begin{align*} f(A) =&\: \left\{ \frac{1}{z-1}\::\:z\in\mathbb{C}-\{1\}\right\} \\ f(A) =&\: \left\{w=\frac{1}{z-1}\::\:z\neq 1\right\} \\ f(A) =&\: \left\{w\::\: z=\frac{1}{w}+1\::\:z\neq 1\right\} \\ f(A) =&\: \{w\::\: w\neq 0\} \\ f(A) =&\: \mathbb{C}-\{0\} \\ \end{align*}


Domain of Combined Functions

The domain of a combined function is the intersection (A\cap B) of the domains of all the component functions, minus any points where the combined rule itself is undefined (e.g. zeros of a denominator). e.g.:

f(z)=\frac{z-1}{z}\:,\:\: z\in\mathbb{C}-\{0\}

g(z)=\frac{z}{z-1}\:,\:\: z\in\mathbb{C}-\{1\}

\frac{f\left(z\right)}{g\left(z\right)} = \frac{z^{2}-2z+1}{z^{2}} \:,\:\: z\in\mathbb{C}-\{0,\:1\}


Domain of Composite Functions

For f and g with domains A and B respectively, the domain of g\circ f is:

A-\{z\::\: f(z)\: \notin\: B\}

e.g., for:

f(z)=\frac{z-1}{z}\:,\:\: z\in\mathbb{C}-\{0\}

g(z)=\frac{z}{z-1}\:,\:\: z\in\mathbb{C}-\{1\}

\text{domain of }f\circ g = \text{domain of }g - \{z\::\:\frac{z}{z-1} \:\notin\: \mathbb{C}-\{0\}\}

\text{domain of }f\circ g = (\mathbb{C}-\{1\} ) - \{z\::\:\frac{z}{z-1} =0\}

\text{domain of }f\circ g = (\mathbb{C}-\{1\} ) - \{0\}

\text{domain of }f\circ g = (\mathbb{C}-\{0,\:1\} )
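
As a sanity check (my own, not from the module text), composing the rules directly collapses nicely:

f(g(z)) = \frac{g(z)-1}{g(z)} = \frac{\frac{z}{z-1}-1}{\frac{z}{z-1}} = \frac{\frac{1}{z-1}}{\frac{z}{z-1}} = \frac{1}{z}

which is undefined exactly at z=0; removing z=1 as well (where g itself is undefined) gives \mathbb{C}-\{0,\:1\}, agreeing with the working above.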


Inverses

  1. Determine image set of f(z)=w.
  2. Invert f(z) to find a unique z in the domain of f.

For f(z)=\frac{1}{z-1}

f(A) = \{\frac{1}{z-1}\::\:z\in\mathbb{C}-\{1\}\}

f(A) = \{w=\frac{1}{z-1}\::\:z\:\neq\: 1\}

f(A) = \{w\::\: z=\frac{1}{w}+1\:\neq\: 1\}

f(A) = \{w\::\: w\:\neq\: 0\}

f(A) = \mathbb{C}-\{0\}

(all same as above for finding an image set)

z=\frac{1}{w}+1 gives a unique solution for each w\in\mathbb{C}-\{0\}, hence f has a unique inverse rule:

f^{-1}(w)=\frac{1}{w}+1\:,\:\:\:w\in\mathbb{C}-\{0\}
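
(If you have Python and SymPy to hand, inversions like this are easy to double-check:)

    from sympy import symbols, Eq, solve

    z, w = symbols('z w')

    # Invert w = 1/(z - 1) by solving for z in terms of w.
    print(solve(Eq(w, 1/(z - 1)), z))  # [(w + 1)/w], i.e. 1/w + 1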

Feedback - 01

I received the marks back for my first monster assignment! Did quite well as it turns out! But this blog isn't about spouting about my success, it's about the learning process! So here's some of the things I screwed up...

First off, my algebra is clearly rusty as fuck. In one instance I put a minus sign in the wrong place AND mysteriously lost a factor of 2 in the course of my working. In future I really need to re-read my working carefully (three or four times over, it seems), both the hand-written version and the fully typed-up LaTeX...

Something else I lost marks on was the apparently simple task of graph sketching, where I either hadn't considered asymptotes or hadn't considered the limits of the domain. Overall I clearly need to be a lot more mindful of whether I'm dealing with \leq or <. I see those symbols so often that I frequently gloss over them without properly considering their usage. Again, pretty basic stuff.

With complex numbers I apparently need to be more explicit with my declaration of forms. My polar form was implicit in the answer, but there wasn't anywhere I actually stated it. Silly boy.

I fell down on a proof of symmetry for an equivalence relation. I just wasn't mindful whilst answering this. It is assumed that x-3y=4n. This can be rearranged in terms of y as y=\frac{x}{3}-\frac{4n}{3}. Substituting for y in the symmetric counterpart y-3x results in: 4 \left(-\frac{2x}{3}-\frac{n}{3}\right). Of course, at this point, proving that what's inside the brackets is an integer is pretty difficult. But that's where I left it. A bit more play would've shown that I could just as easily have rearranged the first equation in terms of x instead, which would've resulted in 4\left( -2y-3n\right), where the bracketed part is rather obviously an integer given the initial variables. More exploration required in future...
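
For my own future reference, the clean version of that symmetry step:

\begin{align*} x-3y=4n \implies x &= 4n+3y \\ y-3x &= y-3(4n+3y) \\ &= -8y-12n \\ &= 4(-2y-3n) \end{align*}

and -2y-3n is plainly an integer.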

Lastly, in my last post I mentioned how there was a distinct lack of symbolic existential or universal quantifiers in all this new material. After Velleman, I was so used to seeing them and working with them appropriately, but because they're no longer around, I got totally burnt by assuming I had to prove "there exists" instead of "for all" for one question. I suppose I'll be able to get around this by making sure my notes explicitly state whatever quantifier we're actually talking about. Damned English language... Symbols are much more concise! 🙂

Large Intro

Finally submitted my first assignment. It was monstrous. Just over 23 pages of mathematics and sketches of graphs. All of it typed up in LaTeX. Skipping ahead to look at the rest of the assignments, it looks as if this first assignment may very well be the biggest of the whole lot. This is a very good thing as I really don't think I could churn out that much work of a high quality every month.

Glad to say that most of this introduction section I was familiar with. Only really new topic was equivalence relations, which caused some problems initially.

Overall though, what I've found difficult is the apparent lack of logical notation. After reading "How To Prove It" I've become half-decent at making sense of and rearranging logical notation to solve a problem. The difficulty comes in looking at the plain-English description of something in the texts and then having to translate it into logical notation to allow my fussy brain to think about it logically.

Perfect example of this is the definition of a function being "onto". In the text, the definition reads:

"A function f: A \longrightarrow B is onto if f(A)=B".

Which is fine, but the Wikipedia definition reads:

"\forall y \in Y, \exists x \in X such that y = f(x)"

Which gives me a much better idea about how to go about proving whether a function is onto. Why leave out the quantifiers? The Wikipedia definition tells me so much more. I suppose translating English into logical notation is just something I'll have to get good at!
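
For instance (my example, not the text's): f:\mathbb{R}\longrightarrow\mathbb{R} with f(x)=x^{2} is not onto, since no x\in\mathbb{R} satisfies f(x)=-1. The quantified definition points you straight at that check; f(A)=B keeps it hidden.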

Though even after this long intro section, I really feel I need more practice with proofs... I guess this may have to wait until revision time... Next up is the first section on group theory, with an assignment due on November 24th. Onward.

PENS DOWN!

So that's it! The Summer has ended!

How far has my extra study got me? Well I've managed to get through around 120 pages of "How To Prove It" by Velleman, and have generated just over 70 double-sided pages of A4's worth of exercises from the book. Not bad for an extra-curricular topic!

This book has helped me loads. It's succeeded in taking away a lot of the mystery involved in reading and writing proofs.

Every topic in the book up until now has flowed well, and allowed me to think about solutions to the various problems fairly naturally. What I mean by that is, I never became absolutely stuck and unable to answer a question.

Having said that, the sub-topic I'm finishing on is proofs involving quantifiers. This is the one area in which I'll admit I've been struggling. At this point in the book, I've learned so much about the number of ways in which to decon/reconstruct a problem that any possible method by which to prove a theorem has actually become less obvious.

Here's an example of how convoluted the scratch work of a basic proof has become. Here's question 14 from p.122:

Suppose  \{A_i\: |\: i \in I\} is an indexed family of sets. Prove that \cup_{i \in I} \mathscr{P} (A_i) \subseteq \mathscr{P}(\cup_{i \in I} A_i).

It's a short question, but this immediately looks like a nightmare to a beginner like myself. We've got a mix of indexed sets, a union over them, and power sets.

First off I need to properly understand the damn thing. Seems sensible to draw up an example using the theorem...

Let's say I is \{1,2\}. So we've got \{A_{1},A_{2}\}.

Now let's say that A_1 = \{1,2\} and A_2 = \{2,3\}.

Looking at the LHS of the thing I need to prove, it's actually pretty easy to break down:

\mathscr{P} (A_1) = \{ \varnothing, \{1\}, \{2\}, \{1,2\}\}

\mathscr{P} (A_2) = \{ \varnothing, \{2\}, \{3\}, \{2,3\}\}

Which means that the union of the power sets of A_i is:

\cup_{i \in I} \mathscr{P} (A_i) = \{ \varnothing, \{1\}, \{2\}, \{3\}, \{1,2\}, \{2,3\}\}

A little half-way recap: the theorem says that in my example, this set of six subsets should be equal to, or be a subset of, \mathscr{P}(\cup_{i \in I} A_i) (the RHS).

Let's see if that's true shall we?

Within the parenthesis of the RHS we've got \cup_{i \in I} A_i. So this is the union of all elements of all indexed sets. In this example:

\cup_{i \in I} A_i = \{1,2,3\}

Only thing missing now is the power set of this:

\mathscr{P}(\cup_{i \in I} A_i) = \{ \varnothing, \{1\}, \{2\}, \{3\}, \{1,2\}, \{1,3\}, \{2,3\}, \{1,2,3\}\}

And there we go. Now I understand exactly what the theorem means. \cup_{i \in I} \mathscr{P} (A_i) \subseteq \mathscr{P}(\cup_{i \in I} A_i), in this specific example, turns out to be :

\{ \varnothing, \{1\}, \{2\}, \{3\}, \{1,2\}, \{2,3\}\} \subseteq \{ \varnothing, \{1\}, \{2\}, \{3\}, \{1,2\}, \{1,3\}, \{2,3\}, \{1,2,3\}\}

which is obviously true. Theorem understood. Achievement unlocked. Tick.
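
(If you'd rather have a machine double-check the example, here's a quick Python sketch; the powerset helper is my own, built on itertools:)

    from itertools import chain, combinations

    def powerset(s):
        # all subsets of s, each as a frozenset so they can sit inside a set
        return {frozenset(c) for c in
                chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

    A = {1: {1, 2}, 2: {2, 3}}                       # the indexed family A_1, A_2
    lhs = set().union(*(powerset(A[i]) for i in A))  # union of the power sets
    rhs = powerset(set().union(*A.values()))         # power set of the union
    print(lhs <= rhs)                                # True: LHS is a subset of RHS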

As recommended by Velleman, I'll also try to construct the phrasing of the proof along-side the scratch work as I go. Like so:

Suppose a thing is true that will help to prove the theorem.

[Proof of theorem goes here]

Thus, we've proved the theorem.

Okay, let's start.

The theorem means that if x \in \cup_{i \in I} \mathscr{P} (A_i) then x \in \mathscr{P}(\cup_{i \in I} A_i). So one thing implies the other. We can then class x \in \cup_{i \in I} \mathscr{P} (A_i) as a "given", and aim to prove x \in \mathscr{P}(\cup_{i \in I} A_i) as a "goal". So blocking out our answer:

Suppose that x \in \cup_{i \in I} \mathscr{P} (A_i).

[Proof of x \in \mathscr{P}(\cup_{i \in I} A_i) goes here]

Thus, \cup_{i \in I} \mathscr{P} (A_i) \subseteq \mathscr{P}(\cup_{i \in I} A_i).

So let's start analysing \cup_{i \in I} \mathscr{P} (A_i), remembering not to go too far with the logical notation. With baby steps, the definition of a union over a family of sets (here, the outer-most part of the logic) is:

\exists i \in I (x \in \mathscr{P}(A_i))

Then, going one step further, using the definition of a power set:

\exists i \in I (x \subseteq A_i)

Now we could go further at this point, applying the definition of a subset, but I'll stop the logical deconstruction here. In this instance, I've found that if I keep going so the entire lot is broken down into logical notation it somehow ends up getting a bit more confusing than it needs to be.

With this as our given, I notice the existential quantifier. Here, I can use "existential instantiation": pick a particular i \in I for which the statement holds, and then take what follows as true. So at this point the new "given" is simply:

x \subseteq A_i

Nice and simple.

Let's update the outline of our proper proof answer:

Suppose that x \in \cup_{i \in I} \mathscr{P} (A_i).

Let i \in I be a value such that x \subseteq A_i.

[Proof of x \in \mathscr{P}(\cup_{i \in I} A_i) goes here]

Thus, \cup_{i \in I} \mathscr{P} (A_i) \subseteq \mathscr{P}(\cup_{i \in I} A_i).

So let's now move on to the "goal" that we have to prove: x \in \mathscr{P}(\cup_{i \in I} A_i).

Again, starting from the outside, going in,

x \in \mathscr{P}(\cup_{i \in I} A_i)

by the definition of a power set becomes:

x \subseteq \cup_{i \in I} A_i

I can't do a lot with this on its own so I'll keep going with the logical deconstruction. By the definition of a subset, this becomes:

\forall a (a \in x \to a \in \cup_{i \in I} A_i)

So now, to prove this "for all" statement, I let a be arbitrary (for the sake of argument, it really can be any element of x), and that leaves us with an updated "givens" list of:

x \subseteq A_i
and
a \in x

and a new "goal" of

a \in \cup_{i \in I} A_i

which will follow immediately if we can show that a \in A_i.

Hey, but wait a sec... Look at our "givens"! If a is in x... and x is a subset of A_i, then a must be in A_i! And if a is in A_i, then it's certainly in the union \cup_{i \in I} A_i. That's our goal!

So update our proof:

Suppose that x \in \cup_{i \in I} \mathscr{P} (A_i).

Let i \in I be a value such that x \subseteq A_i.

Let a be an arbitrary element of x.

[Proof of a \in A_i goes here]

Therefore a \in \cup_{i \in I} A_i. As a is arbitrary, we can conclude that x \subseteq \cup_{i \in I} A_i, and hence that x \in \mathscr{P}(\cup_{i \in I} A_i).

Thus, \cup_{i \in I} \mathscr{P} (A_i) \subseteq \mathscr{P}(\cup_{i \in I} A_i).

So let's wrap this up.

Theorem:
Suppose  \{A_i\: |\: i \in I\} is an indexed family of sets. Prove that \cup_{i \in I} \mathscr{P} (A_i) \subseteq \mathscr{P}(\cup_{i \in I} A_i).

Proof:
Suppose that x \in \cup_{i \in I} \mathscr{P} (A_i). Let i \in I be a value such that x \subseteq A_i, and let a be an arbitrary element of x. If a \in x and x \subseteq A_i, then a \in A_i. Therefore a \in \cup_{i \in I} A_i. As a is arbitrary, we can conclude that x \subseteq \cup_{i \in I} A_i, i.e. x \in \mathscr{P}(\cup_{i \in I} A_i). Thus, we conclude that \cup_{i \in I} \mathscr{P} (A_i) \subseteq \mathscr{P}(\cup_{i \in I} A_i). Q.E.D.

Overall the task has involved unravelling the symbols into logic, making sure they flow together, and then wrapping them back up again.

See what I mean by convoluted? All that work for that one short answer. I must admit, I still don't know if my reasoning is 100% correct with this. Despite some parts of this seeming simple, this really is the very limit of what I'm capable of understanding at the moment. I picked this example to write up, as so far I've found it to be one of the most complicated.

The next section of the book seems to marry this quantifier work with another previous section about conjunctions and biconditionals, that I found to be quite enjoyable at the time. Then towards the end of the chapter, Velleman seems to sneak in some further proof examples using the terms epsilon and delta. I imagine this is a sneaky and clever way to get the reader comfortable with further Analysis study...

Alas, my study of Velleman's book will have to stop here. I understand a lot more than I did, though not everything there is to know. I feel it may be enough to give me a slightly smoother ride through my next module, which was the whole point of me picking this book up. It's been so good, I hope I have a chance to return to it. I feel later chapters would put me in an even better position for further proof work!

For now... the countdown begins for the release of my next module's materials!

Things you need to be told at the beginning

These quotes are from pages 89 and 90 of Velleman's "How To Prove It". If only I'd read all this when I was first introduced to a proof, I wouldn't have been so stressed!

"When mathematicians quote proofs, they usually just write the steps needed to justify their conclusions with no explanation of how they thought of them."

"Although this lack of explanation sometimes makes proofs hard to read, it serves the purpose of keeping two distinct objectives separate: explaining your thought processes and justifying your conclusions."

"The primary purpose of a proof is to justify the claim that the conclusion follows from the hypotheses, and no explanation of your thought processes can substitute for adequate justification of this claim. Keeping any discussion of thought processes to a minimum in a proof helps to keep this distinction clear."

"Don't worry if you don't immediately understand the strategy behind the proof you are reading".

I could hug this book right now.

End of the Quantifiers

A month and a week, and I've just come to the end of the second chapter. Reasonably happy with the progress, but I could be going a bit quicker... Mind you, over just two chapters I've now created 46 pages of A4 of exercises. So there has been a LOT of material to go through. Frankly, just these first two chapters have worked wonders for my understanding of logic and what proofs are founded upon.

This second chapter mainly introduced quantifiers: the concepts of "for all x" and "there exists at least one x". It then quickly branched off into more involved set theory.

The biggest issue I had towards the end of the second chapter was that, on a couple of occasions, I don't think I thought carefully enough about the kind of answer the questions required, i.e. whether the answer was required in logical notation or in set-theoretic notation. Translating between the two is something I certainly found tricky. As such, I decided to write my own definitions of notation in the form of a list (thanks Lara Alcock!). Though the lack of lists of definitions could be considered a slight shortfall of the book, I think I benefited from constructing my own notes and definitions.

I found that towards the end of the questions (because of the lengthier logical notation) I was concentrating more on the definitions than on what the notation actually meant. I'm not convinced this is so good for the learning process, but at least I'm mindful of it now.

The last little topic the second chapter covered was Russell's Paradox, discovered by Bertrand Russell in 1901. The fact that I'm being introduced to stuff like this in the second chapter is pretty cool. Very enjoyable!
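
(For my own notes, the paradox in one line: let R=\{x\::\: x\notin x\}; then R\in R \iff R\notin R, a contradiction, so no such set can exist.)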

Next up, proof technique!

The Joy Of Sets

54 pages, and 5 large exercise sections later, I've finally finished the first chapter of "How To Prove It". With the first chapter being about sentential logic, I've now covered truth tables, derivations of logical operations, set theory, and the conditional and bi-conditional connectives.

The next chapter covers further foundational logical concepts and only in Chapter 3 are the intricacies of actual proofs discussed. Having taken this long to cover the first chapter, and looking at the amount of paper I've used to do all the exercises so far, I'm not that surprised I was finding proofs so difficult. It turns out my intuition was right, I was missing a lot of foundational knowledge.

So far, it's all been going well. Nothing I've looked at in this first chapter has left me mystified and overall I feel like I'm learning. This is exactly where I wanted to be... Just need to up the pace, perhaps...

Books For Understanding Books - Part 2

So this is getting ridiculous. I know, I can only apologise. I'll write some maths on here at some point, I promise.

It turned out that the super-valuable forum post I had on the OU forums has now been deleted. Apparently if a post isn't pinned it gets auto-nuked after two months. So now all that valuable information is gone.

But let's not dwell on it. Especially when there's a new book looming!!!!!

[Image: cover of Velleman's "How To Prove It"]

Now, despite the fact that I've read Lara Alcock's books about how to learn Analysis, and started Brannan's Analysis book (see earlier posts), I realised I was missing more foundation-level knowledge. How To Prove It by Daniel J. Velleman looks like it'll be the book to give it to me. I remember it being recommended in the deleted forum post, and the reviews generally are very, very positive.

Already I've come to the end of the first (admittedly short) section and I can actually attempt all of the exercises! I totally understand everything he's saying and I really feel like I'm learning something with every page. At last!

More of a proper review of this on the way, but for the moment I'll be nose-deep in this for the next couple of months...

Readjusting Learning Methodologies

I've just finished reading Lara Alcock's book on how to learn about Analysis. Or rather, I've finished reading the first part, and the part on the real number system.

Overall, the book has led me to reconsider my current learning technique. So much so, I've compiled a list of steps to follow depending on whether I read about a new definition, theorem or proof.

In turn, this has made me realise I may benefit from starting my main book on Analysis again from the beginning, but applying these new steps as I go. After all, I am still only at the beginning (kind of), and I don't have any kind of deadline looming over me (which is really nice). Overall it seems like the perfect opportunity to try out some new learning methodologies!

Out of the handful of additions, there are two really big changes for me.

The first being mind maps. As I go through my Analysis, I'll be creating a mind map of concepts, seeing how one builds on another. I'd tried to use mind maps before at university and they'd largely proved completely useless. Here, however, mind maps appear to offer a perfect way to visualise the building of concepts into larger concepts. Here's the beginning of my first mind map!

[Image: the beginnings of my first Analysis mind map]

I'm using draw.io as the tool of choice. It seems flexible enough for what I need it for, it can save as XML, and it supports mathematical notation! (MathJax LaTeX formatting, I believe.)

The second big change to my learning involves learning by self-explanation. This technique, mentioned in Lara Alcock's book, appears to be one of the key processes involved in truly understanding and appreciating Analysis. You can find out more about self-explanation training from Loughborough University's Mathematics Education Centre website. My difficulty here will lie in concentrating on actually doing self-explanation, rather than just paraphrasing (it turns out that's a very easy trap to fall into). So long as I do it regularly enough, self-explanation should start to come to me naturally.

Expected result: More effective learning and better notes!