Welcome back to Backspace Academy. In this lab we're going to be creating a Python application using the Serverless Application Model (SAM). Now the AWS documentation on SAM can be a little bit confusing, so what I'm going to do is guide you through a process that I use, and it's quite simple. I don't use sam init, because that creates a hello-world application with a whole heap of things that you don't really need. What I'm going to do instead is start off with a GitHub repository that has simply two files: one of them is the SAM template, and the other is our application's Python code. From there we're going to use sam build to build and prepare our application, and then sam deploy to deploy that application, quite simply. So let's get into it.

Let's go to the GitHub repository. The links for this are in the lab notes, so make sure that you download those lab notes and do all of this yourself. The first thing I would like you to do is star this repository: just click up here and star it. That makes it easier for you to find it again from your GitHub account, and it also makes it easier for other people to find this code. So let's have a look at it. We've got our template.yaml; we'll open that up and have a look at it.
The first thing there is the template format version; we need to have that, because AWS needs to know what the format version is. We also need the Transform declaration for serverless, which instructs CloudFormation that we're going to be using the Serverless Application Model. Then we've defined some resources. The first resource here is called CreateThumbnail, and it's a Lambda function. Its properties say that the code is going to be stored in a folder called code, that our handler is in lambda_function.py, and that inside that lambda_function.py file there will be a lambda handler. Our runtime is going to be Python 3.6, we have our timeout, and we're going to use an AWS managed policy there, AWSLambdaExecute. Then we define an S3 event that's going to trigger this Lambda function. So here we go: it's of type S3, and in the properties we need to say which bucket is going to be firing this S3 event, so we're referencing a source bucket. That source bucket is defined over here, and the event is going to be objects created.
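In template form, the function resource just described comes out roughly like this. This is an illustrative sketch, not the repository's exact file: the logical IDs, the timeout value, and the event name are assumptions, so check the repository's template.yaml for the real thing.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  CreateThumbnail:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: code/                           # folder holding the function code
      Handler: lambda_function.lambda_handler  # file.function to invoke
      Runtime: python3.6
      Timeout: 60                              # illustrative value
      Policies: AWSLambdaExecute               # AWS managed policy
      Events:
        CreateThumbnailEvent:
          Type: S3
          Properties:
            Bucket: !Ref SrcBucket             # the bucket that fires the event
            Events: s3:ObjectCreated:*
```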
So whenever we put an object into, or upload a file to, this source bucket, this S3 event will be triggered, and that will invoke this Lambda function. The next thing we need is a SourceBucket resource, and all we've done here is define its type; we haven't given it a bucket name, so CloudFormation, or the Serverless Application Model, will create this bucket and give it a unique name for us. We also need a destination bucket. What this application is going to do is this: we upload images to a source bucket; that invokes our Lambda function; our Lambda function grabs the image out of that source bucket, resizes it to a thumbnail, and then uploads that thumbnail to a destination bucket; and then the invocation of the Lambda function ends. So we need a destination bucket as well, and what we want is for this destination bucket to have the same name as the source bucket but with -resized on the end. So again we've got type S3 bucket, but this time we're saying it depends on the source bucket: we can't create this destination bucket unless that source bucket exists.
So CloudFormation will create the source bucket, and once that's done it will create the destination bucket. For the BucketName property we need to join the name of that source bucket with -resized on the end. The way we do that is with the CloudFormation function Join, and it works like this: first off we define what character we're going to put between the strings that we are joining. We're not going to put anything between them, not a dash or anything, because we've already got the dash down here as part of "-resized". So we just have two quotes with nothing in the middle: it's empty, no space, no dash, nothing. Then the two strings that we're going to join: the first one references the source bucket, so the name of that source bucket, and then we join that with "-resized". So this destination bucket will have the same name as the source bucket with -resized on the end. That's pretty cool. What we'll do now is go back to our Cloud9 IDE and clone this repository.
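The two bucket resources and the Join just described look roughly like this (again a sketch with illustrative logical IDs, not the repository's literal file):

```yaml
  SrcBucket:
    Type: AWS::S3::Bucket        # no BucketName given, so a unique one is generated

  DstBucket:
    Type: AWS::S3::Bucket
    DependsOn: SrcBucket         # wait until the source bucket exists
    Properties:
      # !Ref on a bucket resource returns its generated name; join it with
      # "-resized", using the empty string as the separator.
      BucketName: !Join ['', [!Ref SrcBucket, '-resized']]
```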
So first off, I'm going to go back to the top of the repository page and copy the URL of the repository, and then I'm going to do git clone and paste in the URL for that repository. There we can see it's cloned the repository: we've got our template.yaml, and we've got our code folder there as well, which is going to contain our lambda_function.py. Now, before we go any further, we need to make sure that the Python version running on our EC2 environment here in Cloud9 is the same as the one we define in our template.yaml. So in our template.yaml, we'll just open that up: we're defining python3.6. The current version of Python is far more advanced than that, but Cloud9 tends to lag a little bit, so I'm just going to run python --version, and we can see we're running Python 3.6 on Cloud9, even though the current version of Python is well advanced from that already; I think it's about 3.8 or something like that. So we'll leave it as it is: 3.6 is fine. What we need to do now is create our requirements.txt file, which will list all of the modules, or packages, that our application requires.
We need to first jump into our code directory; I'm just going to cut and paste that from the lab notes. Now, with Python, unlike with Node.js, you need to create a virtual environment in which to install any modules that are outside of the standard Python library. It's just a quirk of Python: it's not a Lambda thing, it's purely a Python thing. So we need to create this virtual environment, and I'll just cut and paste from the lab notes here. Then we need to activate it, again just copying and pasting from the lab notes. So activate that virtual environment, and now we can see that we're inside this virtual environment here. What we need to do now is install the modules that our application code needs. It needs Pillow: inside the Pillow package there's an Image module that can help us resize these images. It also needs boto3, which is required for Python coding on AWS. Okay, so they've all been installed. What we need to do now is create this requirements.txt file that will list all of these additional packages that are required.
The way we do that: we have the packages installed now on this EC2 environment, so we can do a pip freeze, which will have a look at what our current environment is and write it all into a requirements.txt. So again, copy and paste pip freeze from the lab notes, and now on the left-hand side here we can see a requirements.txt file has been created. We double-click on that and we can see that we've got Pillow in there, we've got boto3 in there, everything that we need to run this application. So just close out of that. What we can do now is deactivate this virtual environment, because we don't need it anymore, and then delete the folder with the virtual environment in it as well; just select it and delete it.

Okay, so that's everything we need for SAM to build this application: we've got the template.yaml, we've got our lambda_function.py, and we've got our requirements.txt. Simply three files is all we need to build this. First we need to go back up to where our template.yaml is, so we just cd up one folder, and we'll do an ls here to make sure that it's in there. So that's where our template.yaml file is.
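The virtual-environment and requirements.txt steps above can be sketched as follows. This is illustrative rather than the lab notes' exact commands: it runs in a throwaway directory, and the pip install line is shown commented out so the sketch works offline.

```shell
# Recreate the requirements.txt workflow in a throwaway directory.
workdir=$(mktemp -d)
cd "$workdir"
python3 -m venv venv               # Python needs a venv for non-stdlib packages
. venv/bin/activate                # activate it; the prompt gains a (venv) prefix
# pip install Pillow boto3         # in the real lab: install what the code imports
pip freeze > requirements.txt      # snapshot installed packages for `sam build`
deactivate                         # leave the venv
rm -rf venv                        # SAM only needs requirements.txt, not the venv
ls                                 # requirements.txt remains
```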
We need to run our sam build from where that is. So do sam build, and SAM will now build that application and prepare it for sam deploy. That was successful. All we need to do now is deploy it, so run sam deploy, and we're going to add --guided, which will ask us a few questions about how we want this to be deployed. Our stack name is going to be sam-app; that's fine, we'll just accept that. Region us-east-1: that's fine. Confirm changes before deploying: that's always a good idea. And we want it to create a role: yes. Save the arguments to the SAM config: yes, and that means that next time we can just type sam deploy and it will pick up whatever's in our SAM config, so we don't have to do --guided again. Now it's going to create a changeset of what we're going to be deploying, and yes, we want to deploy this changeset. So now it's going to start to create all of those resources: it'll create the IAM role and the permissions, it'll create the Lambda function, and it'll create those two buckets, the source bucket and the destination bucket.

Okay, so after about five minutes or so everything has been deployed, so let's scroll up and have a look and see what's been created here.
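As an aside, the answers you give to --guided get saved into a samconfig.toml file at the project root, roughly like the following (the values here are illustrative, not necessarily what this deployment writes):

```toml
version = 0.1
[default.deploy.parameters]
stack_name = "sam-app"
region = "us-east-1"
confirm_changeset = true
capabilities = "CAPABILITY_IAM"
```

That file is what a plain sam deploy reads on subsequent runs.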
We've created a role, then we've created that CreateThumbnail Lambda function, we've created some permissions for it, and we've also created two buckets: the S3 source bucket and our destination bucket. And there is our CloudFormation stack, called sam-app; if we go into CloudFormation we'll be able to find it. So what we'll do now is go to the S3 management console, and we're going to find these two buckets; I'm just going to refresh the screen here. So here we go: we've got a sam-app source bucket, and that's obviously the one that was created for us, and we've also got the same name with -resized, so that's our destination bucket. We're going to go into this source bucket now and upload an image to it; just click next for everything and upload that. So that's uploaded fine. If we now go back to S3 (we'll just open it up in a new window) and open up the resized bucket, we should have that resized object in there. So there we go, it worked. Let's have a look: we've got data.jpg, which is about 147 KB, and our resized one is only 23 KB, so that's our thumbnail; it was created successfully. Everything worked fine, so let's have a look at Lambda and see what we've got
here in Lambda. If I go to the Lambda dashboard and go to Applications this time, there is our SAM application, called sam-app, and its status is create complete. If we click on it we can see the resources here: we've got our CreateThumbnail, which is our Lambda function, and our two S3 buckets. So let's have a look at our Lambda function: just click on here, and that will open up our Lambda function for us, and there it is. Let's click on Monitoring and see what happened when we ran it. Because we've invoked that Lambda function once, there will be logs in there; until we actually invoke a Lambda function, there will be no CloudWatch logs created, but now there will be, because we've actually used it once. Just scrolling down, we can see we've got that log stream; we'll just click on that. Okay, so there we can see that the print statements that were in our code have now been output and stored in CloudWatch Logs: I've got here the function loaded successfully, downloaded the source image from S3, resizing image, and then uploading the resized image to Amazon S3. So that's all looking like it worked perfectly. What we'll do now is have a look at
the code and see how it all worked. So, going in here to lambda_function.py, let's have a look at it. The first line here imports boto3; boto3 is the AWS Python software development kit. Then we've got some imports for other packages that we're using within the code. Those are all part of the standard Python environment that ships with Lambda, so we didn't have to have them in our requirements.txt. But we are also importing Pillow, and in particular the Image module from PIL. We needed to install that and put it into our requirements.txt, so it was part of our build package, and we're importing PIL.Image to resize these images that were uploaded to Amazon S3. Then we're creating a variable, s3_client, which is an object for working with Amazon S3 through the boto3 SDK. What we've got here sits outside of our lambda handler, and that is always good practice: code outside the handler runs only once, when the function first loads, so our Lambda function starts quickly. If we make the lambda handler itself do all of that setup, it's going to slow down every invocation of that Lambda function.
So what's happening here is that it's resizing the image. That image will be located in the Lambda function's temporary storage, and it's going to resize our image to a width of 200 and then save it again to another location in the Lambda function's temporary storage. When the Lambda function's environment is terminated, that temporary storage, and anything in it, will be terminated too. So here we've got the lambda handler, and it's going to get the bucket name from the event that is passed to this Lambda function. So where do we get that information from?
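To make the flow concrete before we dig into the docs, here is a hedged, runnable sketch of what the walkthrough describes: pulling the bucket and key out of the event, resizing, and uploading. The names (bucket_and_key, thumbnail_size, handle, FakeS3) are illustrative, not the lab's actual code, and FakeS3 stands in for the boto3 S3 client so the sketch runs without AWS; the real code calls s3_client.download_file and upload_file with the same argument order.

```python
import os
import shutil
import tempfile
from urllib.parse import unquote_plus


def bucket_and_key(event):
    """Pull the source bucket and object key out of an S3 event record."""
    record = event['Records'][0]               # only the first record is used
    bucket = record['s3']['bucket']['name']
    # Keys arrive URL-encoded (a space becomes '+'), so decode them first.
    key = unquote_plus(record['s3']['object']['key'])
    return bucket, key


def thumbnail_size(width, height, target_width=200):
    """Size of a thumbnail scaled to 200 px wide, keeping the aspect ratio."""
    return target_width, round(height * target_width / width)


class FakeS3:
    """Records calls instead of talking to Amazon S3."""
    def __init__(self):
        self.uploads = []

    def download_file(self, bucket, key, path):
        with open(path, 'w') as f:             # pretend we fetched the object
            f.write('image-bytes')

    def upload_file(self, path, bucket, key):
        self.uploads.append((bucket, key))


def handle(event, s3, resize=shutil.copy):
    bucket, key = bucket_and_key(event)
    tmp = tempfile.gettempdir()                # Lambda's scratch space is /tmp
    download_path = os.path.join(tmp, 'in-' + key)
    upload_path = os.path.join(tmp, 'out-' + key)
    s3.download_file(bucket, key, download_path)           # source -> /tmp
    resize(download_path, upload_path)                     # make the thumbnail
    s3.upload_file(upload_path, bucket + '-resized', key)  # /tmp -> destination


event = {'Records': [{'s3': {'bucket': {'name': 'sam-app-srcbucket-abc123'},
                             'object': {'key': 'data.jpg'}}}]}
s3 = FakeS3()
handle(event, s3)
print(s3.uploads)                # [('sam-app-srcbucket-abc123-resized', 'data.jpg')]
print(thumbnail_size(800, 600))  # (200, 150)
```

The actual event layout this sketch assumes is exactly what the next section pulls up in the S3 documentation.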
We need to go to the S3 documentation, so I'm just going to bring that up now. If we go to the S3 developer guide, then to Notifications, and then to Event message structure, it will give you the format of the JSON object that is passed to the Lambda function when it is invoked from this S3 event. So here we can see we've got an array of records. We're only interested in the first record in that array, so that will be Records[0], and then we're interested in s3, bucket, and name to get the name of our source bucket; we're also interested in s3, object, and key, which will be the name of the object, the file, that we uploaded to that bucket. So now we can go back to Cloud9 and have a look, and there we can see the same thing: we've got Records and the first record in that array, then s3, bucket, and name, and then here we've got Records again with s3, object, and key for our key name. The first thing we do is remove any bad characters, any non-ASCII characters and spaces, from that key, so we do that there. Once we've done that, what we can do is download that image from Amazon S3 and then store it in temporary storage. So that's what we're doing here: we're defining where that temporary storage is. The
tmp folder of this Lambda function, /tmp, is where temporary storage lives, and when that Lambda function's environment is terminated, that temporary storage will also be terminated and lost. So what we do is use s3_client, which again we've defined as part of our boto3 SDK setup, and we call download_file with the name of the bucket, the key, and then the path, which will be the temporary-storage location where we're going to store this image when we download it. Once we've got it into our temporary storage, we're going to define an upload path, which will also be in temporary storage, and then we're going to resize that image: we go up here to the resize_image function and resize it to a width of 200, calling image.resize and then image.save to save it to the Lambda function's temporary storage. So once we've resized the image and put it into temporary storage, then we can upload it to our destination S3 bucket. So here we go: we use s3_client again and call upload_file with the upload path, which will be the temporary-storage location that we've got our resized image in, then the bucket that we want it to go to, and then the actual name that we're going to save it as
there, which is the key. So that's what we're doing: we're getting an image from Amazon S3 and downloading it to temporary storage, we're resizing that image to a width of 200 and saving it back to temporary storage, and then, once that's all done and ready, we're going to upload it from temporary storage up to that Amazon S3 destination bucket. And that's how it all works. I hope you understood all that; it's pretty straightforward, I think. What we'll do now is finish up the lab and clean up everything. The first thing we need to do is empty these S3 buckets, so go into the S3 bucket, then Actions and Delete, and delete the object in there; but we don't delete the actual bucket, we let CloudFormation do that for us. And that one there: again, we'll delete its object as well. Okay, so now we can go into CloudFormation, and there is our sam-app stack, and we can delete that. That will delete the stack and any resources that were created by it, so it will delete those S3 buckets, it will delete the SAM application in Lambda along with the function, and also the IAM role that was created. So that brings us to the end of a pretty advanced lab
and I hope I've broken it all down into something quite simple, because it can be quite confusing when you're reading the AWS documentation, so I think this is quite a simple way of doing things. I hope you enjoyed it, and I look forward to seeing you in the next labs.