# Complex Analysis Assignment 1

My first complex analysis assignment has been marked and returned. I don't think I've ever felt the urge quite so much to learn from my mistakes.

Consequently there has been quite a lot of post-assignment learning... :/

This assignment featured a very brief introduction to complex numbers as a refresher, then broadly covered complex functions, the concept of continuity and complex differentiation.

So in no particular order, below are some notes on mistakes I made and how I could've avoided them! There's a lot to reflect on here...

Read questions carefully. One of the first very simple questions read "express in polar form and determine all fourth roots". I did the second bit, but not the first.
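As a sanity check, the whole "express in polar form AND find all fourth roots" routine is easy to script. A minimal sketch with Python's `cmath` (the value `z = -16` is my own stand-in, not the number from the assignment):

```python
import cmath

z = -16  # hypothetical value; the actual number from the question is lost
r, theta = cmath.polar(z)          # polar form: z = r*e^(i*theta)
# the four fourth roots: r^(1/4) * e^(i*(theta + 2*pi*k)/4), for k = 0..3
roots = [cmath.rect(r ** 0.25, (theta + 2 * cmath.pi * k) / 4) for k in range(4)]
for w in roots:
    assert abs(w ** 4 - z) < 1e-9  # each root really is a fourth root of z
```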

I feel this is a bit "Complex Numbers 101", but the square root sign is defined as the principal square root (of a complex number), i.e. there's no need to calculate the second root.
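The principal-root convention is concrete in code too: `cmath.sqrt` returns only the principal square root, and the second root is just its negation. A small sketch (the value of `z` is hypothetical):

```python
import cmath

z = 3 + 4j                        # hypothetical example value
w = cmath.sqrt(z)                 # principal square root: argument in (-pi/2, pi/2]
assert abs(w - (2 + 1j)) < 1e-12  # the root the radical sign denotes
# the "other" square root is simply -w, so there's nothing extra to calculate
assert abs((-w) ** 2 - z) < 1e-12
```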

If you're using the triangle inequality, state it specifically.

Again, this is fairly "Complex Numbers 101", but the polar form of a complex number isn't just a cosine function as the real part and a sine function as the imaginary part. The arguments to both functions must be identical to qualify as "polar form", i.e. you should be able to write the complex number in exponential form.
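That identical-argument requirement is easy to verify numerically. A quick sketch, assuming nothing beyond the standard library:

```python
import cmath

z = 1 + 1j
r, theta = cmath.polar(z)   # r = sqrt(2), theta = pi/4
# polar form needs the SAME argument theta in both trig functions...
assert abs(r * (cmath.cos(theta) + 1j * cmath.sin(theta)) - z) < 1e-12
# ...which is exactly what makes the exponential form work:
assert abs(r * cmath.exp(1j * theta) - z) < 1e-12
```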

Top tip: Be mindful about using identities. In complex analysis there are loads of them and they help a great deal.

When working out the inverse of a complex function, it's important to use your common sense. Part of one inverse I'd calculated had a square root in it. Just by looking at that, you know it could never produce a unique answer (it isn't a one-to-one function).

For another, I had to find the inverse of and the domain of that inverse. I got this spectacularly wrong. I'd written: given , hence .

Trick here was to exponentiate each side, leading to . But the domain of the inverse isn't affected by the "3" above, the image set of the original function is still .

Some complex functions are very very different to their real equivalents. Case in point: , but . Which leads to the next note:

If is the divisor in a complex quotient, you need to show that it's only 0 for values outside of the given range of the equation (eg ).

For one question, I had to prove that was continuous. I thought this was easy.

is a basic continuous function on . So if you let , then is continuous, right?

Not quite. I had entirely forgotten to state that the given set is a subset of the set I gave: .

The answer can appear obvious sometimes, but you have to keep your answer rigorous, otherwise you risk losing half marks or whole marks here and there.


For one question I had to prove whether a set was a region or not. For reference, a region is a non-empty, connected, open subset of ℂ. In the usual manner, if you can show that any one of those three properties fails to hold, then you've proved that your set isn't a region. Easy.

I realised I could prove a set was closed, and hence not a region. Turns out this was incorrect. A set being "closed" and a set being "not open" have two completely different definitions, and are treated as different things. I was meant to show it was "not open", as opposed to showing it was "closed".

In other words, mathematically:

Closed is not the same as not-open.
Closed is not the opposite of open.
Not-open is the opposite of open.

Again, here I needed to provide a proof based on the properties of various objects. Given a set that was compact (closed and bounded), I needed to prove that a function was bounded on that set.

The Boundedness Theorem states that if a function is continuous on a compact set, then that function is bounded on that set.

The function was:

I proved that the given function was continuous on its domain, but I'd failed to prove it was continuous on the set. Here, I needed to show where the function was undefined, THEN show that those points at which it was undefined all lay outside of the set. So there was quite a lot of work I left out of this answer.

Requiring both the Cauchy-Riemann theorem AND the Cauchy-Riemann Converse theorem within the same proof ended up not flowing very well logically. Once again, I'd jumped ahead with my logic. As soon as I had seen something obvious, I felt the urge to state it immediately.

The Cauchy-Riemann theorem proves that a function is not differentiable at certain points. The Converse theorem then proves that a function IS differentiable at certain points. After using the Cauchy-Riemann theorem, it was extremely obvious where the function was differentiable, so I stated it. Then, as a matter of course, I plodded through the Converse theorem to prove it. Complete lack of discipline!
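To make the Cauchy-Riemann check concrete, here's a sketch using SymPy with the classic example f(z) = z̄ (my own choice, not the assignment's function), written as f = u + iv with u = x and v = -y:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# hypothetical example: f(z) = conj(z), so u(x, y) = x and v(x, y) = -y
u, v = x, -y
# Cauchy-Riemann equations: u_x = v_y and u_y = -v_x
cr1 = sp.simplify(sp.diff(u, x) - sp.diff(v, y))  # zero iff u_x = v_y
cr2 = sp.simplify(sp.diff(u, y) + sp.diff(v, x))  # zero iff u_y = -v_x
assert cr1 == 2 and cr2 == 0  # u_x != v_y everywhere: f is nowhere differentiable
```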

# Complex Functions: Domains, Image Sets and Inverses

I can imagine having to refer to these notes regularly, so I'm putting them here!

# Image Sets

1. State the domain of .
2. Rearrange so is a function of (to discover the condition under which remains valid).

e.g., for :
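Since the worked example above didn't survive conversion, here's a hypothetical stand-in for step 2 with SymPy: rearranging w = f(z) for z exposes the condition on w (here f(z) = 1/z, so the image set excludes w = 0):

```python
import sympy as sp

z, w = sp.symbols('z w')
f = 1 / z                          # hypothetical function; the original was lost
sols = sp.solve(sp.Eq(w, f), z)    # rearrange so z is a function of w
assert sols == [1 / w]
# z = 1/w is valid exactly when w != 0, so the image set is C \ {0}
```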

# Domain of Combined Functions

The domain of a combined function is the intersection (∩) of the domains of all component functions and that of the combined function. e.g.:
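A sketch of the same idea with hypothetical components f(z) = 1/z and g(z) = 1/(z - 1), restricted to the real line so SymPy's set machinery applies directly:

```python
import sympy as sp
from sympy import S, FiniteSet

# hypothetical component functions: f(z) = 1/z and g(z) = 1/(z - 1)
dom_f = S.Reals - FiniteSet(0)       # f undefined at 0
dom_g = S.Reals - FiniteSet(1)       # g undefined at 1
dom_sum = dom_f.intersect(dom_g)     # domain of the combined function f + g
assert 0 not in dom_sum and 1 not in dom_sum and 2 in dom_sum
```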

# Domain of Composite Functions

For and with domains and respectively, the domain of is:

e.g., for:
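Again, the original example was lost, so here's a hypothetical one: with f(z) = z² on ℝ and g(w) = 1/(w - 1), the composite g∘f is defined wherever f(z) lands in the domain of g, i.e. wherever z² ≠ 1:

```python
import sympy as sp
from sympy import S

z = sp.symbols('z', real=True)
# hypothetical: f(z) = z**2 with domain R, g(w) = 1/(w - 1) with domain R \ {1}
# domain of g∘f = { z in dom(f) : f(z) in dom(g) }, i.e. z**2 != 1
excluded = sp.solveset(sp.Eq(z ** 2, 1), z, S.Reals)  # {-1, 1}
dom_gf = S.Reals - excluded
assert 1 not in dom_gf and -1 not in dom_gf and 2 in dom_gf
```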

# Inverses

1. Determine image set of .
2. Invert to find a unique in the domain of .

For

(all same as above for finding an image set)

gives a unique solution in , hence has a unique inverse rule:
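The two-step recipe can be sketched with SymPy on a hypothetical function (the original was lost); getting exactly one solution from the rearrangement is what licenses a unique inverse rule:

```python
import sympy as sp

z, w = sp.symbols('z w')
f = z / (z - 1)                  # hypothetical function; the original is lost
sols = sp.solve(sp.Eq(w, f), z)  # invert: find z in terms of w
assert len(sols) == 1            # unique solution => a unique inverse rule
g = sols[0]                      # g(w) = w/(w - 1)
assert sp.simplify(f.subs(z, g) - w) == 0   # check: f(g(w)) = w
```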

# Pumping Lemma

Way, way off the beaten path here, but this is the best example of usage of the pumping lemma I've seen. Just need somewhere to put it... The below is taken from here.

Theorem: Let L be a regular language. Then there exists a constant n s.t. every string w ∈ L with |w| ≥ n can be broken into three strings, w = xyz, s.t.:
• |y| ≥ 1
• |xy| ≤ n
• for all k ≥ 0, the string xy^k z is also in L
Method to prove that a language L is not regular:
• At first, we have to assume that L is regular.
• So, the pumping lemma should hold for L.
• Use the pumping lemma to obtain a contradiction:
• Select w s.t. |w| ≥ n.
• Select y s.t. |y| ≥ 1.
• Select x s.t. |xy| ≤ n.
• Assign the remaining string to z.
• Select k s.t. the resulting string xy^k z is not in L.
Problem: Prove that L = {a^i b^i | i ≥ 0} is not regular. Solution:
• At first, we assume that L is regular and n is the number of states.
• Let w = a^n b^n. Thus |w| = 2n ≥ n.
• By the pumping lemma, let w = xyz, where |xy| ≤ n.
• Let x = a^p, y = a^q, and z = a^r b^n, where p + q + r = n, p ≠ 0, q ≠ 0, r ≠ 0. Thus |y| ≠ 0.
• Let k = 2. Then xy^2 z = a^(p + 2q + r) b^n.
• Number of a's = (p + 2q + r) = (p + q + r) + q = n + q.
• Hence, xy^2 z = a^(n + q) b^n. Since q ≠ 0, xy^2 z is not of the form a^n b^n!
• Thus, xy^2 z ∉ L. Hence L is not regular.
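The contradiction can be watched happening in a few lines of Python. This is only a brute-force illustration of the argument, with a hypothetical pumping length n = 5 and the split p = 1, q = 1, r = n - 2:

```python
def in_L(s):
    """Membership test for L = { a^i b^i : i >= 0 }."""
    i = len(s) // 2
    return s == 'a' * i + 'b' * i

n = 5                                          # hypothetical pumping length
w = 'a' * n + 'b' * n
assert in_L(w) and len(w) >= n

x, y, z = 'a', 'a', 'a' * (n - 2) + 'b' * n    # |xy| <= n and |y| >= 1
assert x + y + z == w

pumped = x + y * 2 + z                         # xy^2z = a^(n+1) b^n
assert not in_L(pumped)                        # contradiction => L not regular
```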

# Feedback - 01

I received the marks back for my first monster assignment! Did quite well as it turns out! But this blog isn't about spouting about my success, it's about the learning process! So here's some of the things I screwed up...

First off, my algebra is clearly rusty as fuck. In one instance I put a minus sign in the wrong place AND mysteriously lost a factor of 2 in the process of my working. In future I really need to re-read my working really carefully (three or four times over, it seems), both the hand-written version and the full typed-up LaTeX...

Something else I lost marks for was the apparently simple task of graph sketching, either where I hadn't considered asymptotes or had not considered the limits of the domain. Overall I clearly need to be a lot more mindful of whether I'm dealing with or . When I read those symbols I see them both so often, I frequently gloss over them without properly considering their usage. Again, pretty basic stuff.

With complex numbers I apparently need to be more explicit with my declaration of forms. My polar form was implicit in the answer, but there wasn't anywhere I actually stated it. Silly boy.

I fell down on a proof of symmetry for an equivalence relation. I just wasn't mindful whilst answering this. It is assumed that . This can be rearranged in terms of as . So substituting y, in the symmetrical results in: . Of course, at this point, proving that what's inside the brackets is an integer is pretty difficult. But that's where I left it. A bit more play would've shown that I could easily have rearranged the first equation in terms of instead, which would've resulted in , which is rather obviously an integer given the initial variables. More exploration required in future...

Lastly, in my last post I mentioned how there was a distinct lack of symbolic existential or universal quantifiers in all this new material. After Velleman, I was so used to seeing them, and working with them appropriately, but because they're now not around, I got totally burnt by assuming I had to prove "there exists" instead of "for all" for one question. I suppose I'll be able to get around this by making sure my notes explicitly state whatever quantifier we're actually talking about. Damned English language... Symbols are much more concise!

# Large Intro

Finally submitted my first assignment. It was monstrous. Just over 23 pages of mathematics and sketches of graphs. All of it typed up in LaTeX. Skipping ahead to look at the rest of the assignments, it looks as if this first assignment may very well be the biggest of the whole lot. This is a very good thing as I really don't think I could churn out that much work of a high quality every month.

Glad to say that most of this introduction section I was familiar with. Only really new topic was equivalence relations, which caused some problems initially.

Overall though, what I've found difficult is the apparent lack of logical notation. After reading "How To Prove It" I've become half-decent at making sense of and rearranging logical notation to solve a problem. The difficulty comes in looking at the plain-English description of something in the texts and then having to translate it into logical notation to allow my fussy brain to think about them logically.

Perfect example of this is the definition of a function being "onto". In the text, the definition reads:

"A function is onto if ".

Which is fine, but the Wikipedia definition reads:

"∀y ∈ B, ∃x ∈ A such that f(x) = y"

Which for me, gives me a much better idea about how to go about proving if a function is onto. Why leave out the quantifiers? The Wikipedia definition tells me so much more. I suppose translating English into logical notation is just something I'll have to get good at!
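The quantifier version translates almost word-for-word into code, which is partly why I find it clearer. A sketch over small finite sets (the sets and functions here are my own hypothetical examples):

```python
def is_onto(f, A, B):
    """f maps A onto B iff: for all y in B, there exists x in A with f(x) = y."""
    return all(any(f(x) == y for x in A) for y in B)

A = {-2, -1, 0, 1, 2}
assert is_onto(lambda x: x * x, A, {0, 1, 4})        # every y has a preimage
assert not is_onto(lambda x: x * x, A, {0, 1, 2})    # nothing in A squares to 2
```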

Though even after this long intro section, I really feel I need more practice with proofs... I guess this may have to wait until revision time... Next up is the first section on group theory, with an assignment due on November 24th. Onward.

# PENS DOWN!

So that's it! The Summer has ended!

How far has my extra study got me? Well I've managed to get through around 120 pages of "How To Prove It" by Velleman, and have generated just over 70 double-sided pages of A4's worth of exercises from the book. Not bad for an extra-curricular topic!

This book has helped me loads. It's succeeded in taking away a lot of the mystery involved in reading and writing proofs.

Every topic in the book up until now has flowed well, and allowed me to think about solutions to the various problems fairly naturally. What I mean by that is, I never became absolutely stuck and unable to answer a question.

Having said that, the sub-topic I'm finishing on is proofs involving quantifiers. This is the one area in which I'll admit I've been struggling. At this point in the book, I've learned so much about the number of ways in which to decon/reconstruct a problem that any possible method by which to prove a theorem has actually become less obvious.

Here's an example of how convoluted the scratch work for a basic proof has become, using question 14 from p.122:

Suppose {A_i}_{i∈I} is an indexed family of sets. Prove that ⋃_{i∈I} ℘(A_i) ⊆ ℘(⋃_{i∈I} A_i).

It's a short question, but this immediately looks like a nightmare to a beginner like myself. We've got a mix of indexed sets, a union over them, and power sets.

First off I need to properly understand the damn thing. Seems sensible to draw up an example using the theorem...

Let's say is . So we've got .

Now let's say that and .

Looking at the LHS of the thing I need to prove, it's actually pretty easy to break down:

Which means that the union of all the elements of the power sets of is:

A little half-way recap: the theorem says that in my example, should be equal to, or be a subset of (the RHS).

Let's see if that's true shall we?

Within the parenthesis of the RHS we've got . So this is the union of all elements of all indexed sets. In this example:

Only thing missing now is the power set of this:

And there we go. Now I understand exactly what the theorem means. , in this specific example, turns out to be :

which is obviously true. Theorem understood. Achievement unlocked. Tick.
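For what it's worth, the whole example can be machine-checked. A sketch with a hypothetical family over I = {1, 2} (I couldn't recover the exact sets from the original post, so I've picked A₁ = {1, 2} and A₂ = {2, 3}):

```python
from itertools import combinations

def power_set(s):
    """All subsets of s, as a set of frozensets."""
    s = list(s)
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

# hypothetical indexed family with I = {1, 2}
A = {1: {1, 2}, 2: {2, 3}}
lhs = power_set(A[1]) | power_set(A[2])   # union of the power sets
rhs = power_set(A[1] | A[2])              # power set of the union
assert lhs <= rhs      # the theorem's subset relation holds...
assert lhs != rhs      # ...and here it's strict (e.g. {1, 3} is only in rhs)
```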

As recommended by Velleman, I'll also try to construct the phrasing of the proof along-side the scratch work as I go. Like so:

Suppose a thing is true that will help to prove the theorem.

[Proof of theorem goes here]

Thus, we've proved the theorem.

Okay, let's start.

The theorem means that if x ∈ ⋃_{i∈I} ℘(A_i) then x ∈ ℘(⋃_{i∈I} A_i). So one thing implies the other. We can then class x ∈ ⋃_{i∈I} ℘(A_i) as a "given", and aim to prove x ∈ ℘(⋃_{i∈I} A_i) as a "goal". So blocking out our answer:

Suppose that x ∈ ⋃_{i∈I} ℘(A_i).

[Proof of x ∈ ℘(⋃_{i∈I} A_i) goes here]

Thus, x ∈ ℘(⋃_{i∈I} A_i).

So let's start analysing the given, remembering not to go too far with the logical notation. With baby steps, the definition of a union over a family of sets (here, the outer-most part of the logic) gives:

∃i ∈ I (x ∈ ℘(A_i))

Then, going one step further, using the definition of a power set:

∃i ∈ I (x ⊆ A_i)

Now we could go further at this point, applying the definition of a subset, but I'll stop the logical deconstruction here. In this instance, I've found that if I keep going so the entire lot is broken down into logical notation, it somehow ends up getting a bit more confusing than it needs to be.

With this as our given, I notice the existential quantifier. Here, I can use "existential instantiation" to fix a particular i ∈ I for which what follows is true. So at this point the new "given" is simply:

x ⊆ A_i

Nice and simple.

Let's update the outline of our proper proof answer:

Suppose that x ∈ ⋃_{i∈I} ℘(A_i).

Let i ∈ I be such a value that x ⊆ A_i.

[Proof of x ∈ ℘(⋃_{i∈I} A_i) goes here]

Thus, x ∈ ℘(⋃_{i∈I} A_i).

So let's now move on to our "goal" that we have to prove: x ∈ ℘(⋃_{i∈I} A_i).

Again, starting from the outside going in, the goal, by the definition of a power set, becomes:

x ⊆ ⋃_{i∈I} A_i

I can't do a lot with this on its own so I'll keep going with the logical deconstruction. By the definition of a subset, this becomes:

∀y (y ∈ x → y ∈ ⋃_{i∈I} A_i)

So now, using "universal instantiation" I can say that y here is arbitrary (for the sake of argument, it really can be anything), and that leaves us with an updated "givens" list of:

x ⊆ A_i

and

y ∈ x

and a new "goal" of

y ∈ ⋃_{i∈I} A_i

Hey, but wait a sec... Look at our "givens"! If y is in x... and x is a subset of A_i, then y must be in A_i, and hence in ⋃_{i∈I} A_i -and that's our goal!

So update our proof:

Suppose that x ∈ ⋃_{i∈I} ℘(A_i).

Let i ∈ I be such a value that x ⊆ A_i.

Let y be an arbitrary element of x.

[Proof of y ∈ ⋃_{i∈I} A_i goes here]

Therefore y ∈ ⋃_{i∈I} A_i. As y is arbitrary, we can conclude that x ⊆ ⋃_{i∈I} A_i.

Thus, x ∈ ℘(⋃_{i∈I} A_i).

So let's wrap this up.

Theorem:
Suppose {A_i}_{i∈I} is an indexed family of sets. Prove that ⋃_{i∈I} ℘(A_i) ⊆ ℘(⋃_{i∈I} A_i).

Proof:
Suppose that x ∈ ⋃_{i∈I} ℘(A_i). Let i ∈ I be such a value that x ⊆ A_i, and let y be an arbitrary element of x. But if y ∈ x and x ⊆ A_i, then y ∈ A_i, so y ∈ ⋃_{i∈I} A_i. Therefore y ∈ ⋃_{i∈I} A_i. As y is arbitrary, we can conclude that x ⊆ ⋃_{i∈I} A_i. Thus, we conclude that x ∈ ℘(⋃_{i∈I} A_i), and hence ⋃_{i∈I} ℘(A_i) ⊆ ℘(⋃_{i∈I} A_i). ∎

Overall the task has involved unravelling the symbols into logic, making sure they flow together, and then wrapping them back up again.

See what I mean by convoluted? All that work for that one short answer. I must admit, I still don't know if my reasoning is 100% correct with this. Despite some parts of this seeming simple, this really is the very limit of what I'm capable of understanding at the moment. I picked this example to write up, as so far I've found it to be one of the most complicated.

The next section of the book seems to marry this quantifier work with an earlier section about conjunctions and biconditionals, which I found quite enjoyable at the time. Then towards the end of the chapter, Velleman seems to sneak in some further proof examples using epsilon and delta. I imagine this is a sneaky and clever way to get the reader comfortable with further Analysis study...

Alas, my study of Velleman's book will have to stop here. I understand a lot more than I did, though not everything there is to know. I feel it may be enough to give me a slightly smoother ride through my next module, which was the whole point of me picking this book up. It's been so good, I hope I have a chance to return to it. I feel later chapters would put me in an even better position for further proof work!

For now... the countdown begins for the release of my next module's materials!

# Things you need to be told at the beginning

These quotes are from pages 89 and 90 of Velleman's "How To Prove It". If only I'd read all this when I was first introduced to a proof, I wouldn't have been so stressed!

"When mathematicians write proofs, they usually just write the steps needed to justify their conclusions with no explanation of how they thought of them."

"Although this lack of explanation sometimes makes proofs hard to read, it serves the purpose of keeping two distinct objectives separate: explaining your thought processes and justifying your conclusions."

"The primary purpose of a proof is to justify the claim that the conclusion follows from the hypotheses, and no explanation of your thought processes can substitute for adequate justification of this claim. Keeping any discussion of thought processes to a minimum in a proof helps to keep this distinction clear."

"Don't worry if you don't immediately understand the strategy behind the proof you are reading".

I could hug this book right now.

# End of the Quantifiers

A month and a week, and I've just come to the end of the second chapter. Reasonably happy with the progress, but I could be going a bit quicker... Mind you, over just two chapters I've now created 46 pages of A4 of exercises. So there has been a LOT of material to go through. Frankly, just these first two chapters have worked wonders for my understanding of logic and what proofs are founded upon.

This second chapter mainly introduced quantifiers. The concept of "for all x" and "there exists at least one x...", but quickly branched off into more involved set theory.

The biggest issue I had towards the end of the second chapter was that on a couple of occasions, I don't think I thought carefully enough about the kind of answer the questions required, i.e. whether the answer was required in logical notation or in set theory notation. Translating between the two is something I certainly found tricky. As such, I decided to write my own definitions of notation in the form of a list (thanks Lara Alcock!). Though the lack of lists of definitions could be considered a slight shortfall of the book, I think I benefited from constructing my own notes and definitions.

I found that towards the end of the questions (because of the more lengthy logical notation) I was concentrating more on the definitions than what the notation actually meant. Not convinced this is so good for the learning process, but at least I'm mindful of it now.

The last little topic the second chapter covered was Russell's Paradox, discovered by Bertrand Russell in 1901. The fact that I'm being introduced to stuff like this in the second chapter is pretty cool. Very enjoyable!

Next up, proof technique!

# The Joy Of Sets

54 pages, and 5 large exercise sections later, I've finally finished the first chapter of "How To Prove It". With the first chapter being about sentential logic, I've now covered truth tables, derivations of logical operations, set theory, and the conditional and bi-conditional connectives.

The next chapter covers further foundational logical concepts and only in Chapter 3 are the intricacies of actual proofs discussed. Having taken this long to cover the first chapter, and looking at the amount of paper I've used to do all the exercises so far, I'm not that surprised I was finding proofs so difficult. It turns out my intuition was right, I was missing a lot of foundational knowledge.

So far, it's all been going well. Nothing I've looked at in this first chapter has left me mystified and overall I feel like I'm learning. This is exactly where I wanted to be... Just need to up the pace, perhaps...

# Books For Understanding Books - Part 2

So this is getting ridiculous. I know, I can only apologise. I'll write some maths on here at some point, I promise.

It turned out that the super-valuable forum post I had on the OU forums has now been deleted. Apparently if a post isn't pinned it gets auto-nuked after two months. So now all that valuable information is gone.

But let's not dwell on it. Especially when there's a new book looming!!!!!

Now, despite the fact that I've read Lara Alcock's books about how to learn Analysis, and started Brannan's Analysis book (see earlier posts), I realised I was missing more foundation-level knowledge. How To Prove It by Daniel J. Velleman looks like it'll be the book to give it to me. I remember it being recommended on the deleted forum post, and the reviews generally are very very positive.

Already I've come to the end of the first (admittedly short) section and I can actually attempt all of the exercises! I totally understand everything he's saying and I really feel like I'm learning something with every page. At last!

More of a proper review of this on the way, but for the moment I'll be nose-deep in this for the next couple of months...