Welcome back to BackSpace Academy. In this lecture we're going to talk about the Lambda service. We'll start off by talking about runtimes: what they are and how they set up our Lambda instance when it's invoked, and we'll also talk about how a Lambda function executes. Then we'll talk about how we can develop our code, package it, and deploy it to a Lambda function. We'll talk about the different ways in which a Lambda function can be invoked. We'll talk about how our Lambda function can connect to the internet, to private subnets, and to databases. Then we'll talk about how we can get the maximum performance out of our Lambda function, and finally we'll talk about how we can monitor the performance of our Lambda function.

A runtime is run when the Lambda function is invoked. The first thing it does is read the function setup code, and that could be setting up that Lambda instance for running Node.js or Python or whatever, and there may be other things that need to be set up as well for that invoked Lambda instance. Once that Lambda instance has been set up, the runtime will read the handler name from an environment variable. Now the handler is code that you supply, maybe in Node.js or maybe in Python, but it is your code that you are going to be running on this Lambda instance.
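The handler the runtime calls might look like this minimal Python sketch. The file name lambda_function.py and the greeting logic are just illustrative; Lambda locates the function through the handler setting, for example lambda_function.lambda_handler.

```python
# lambda_function.py - a minimal Lambda handler sketch.
# The runtime calls this function with the event payload and a context object.

def lambda_handler(event, context):
    # event: a dict with the invocation data (shape depends on the event source)
    # context: runtime metadata (request ID, remaining time, and so on)
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```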
It will then read the invocation events. So, for example, if we had an S3 event that is triggered when we upload an object into an S3 bucket, then that event data will include a whole heap of things. Among others it will have the S3 bucket name and the S3 object that was uploaded. All of the information within that event will be passed from the Lambda runtime API, and that information will then be passed over to the handler to process. Then, once the handler has run its code, it will post the response back to the runtime, and the runtime will forward that on to the Lambda runtime API.

Now, there are managed runtimes available for Node.js, Python, Java, .NET, Ruby and Go. Theoretically you can create a custom runtime for any language that you want; you're not limited to the AWS managed runtimes. Of course it's going to be a lot of work for you to create your own custom runtime, but if you want to do that it is possible. A custom runtime will run in the standard Lambda execution environment and, as before, it will use the runtime API to receive events and then send the responses back. The way that you set up a custom runtime is that in your function's runtime definition, when you're creating that function, you need to set the runtime to provided.
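For the S3 example above, a handler can pull the bucket name and object key out of the event data. This sketch follows the S3 event notification structure; the handler itself is illustrative.

```python
# Sketch of a handler that pulls the bucket and key out of an S3 event.
# The Records/s3 structure below follows the S3 event notification format.
import urllib.parse

def lambda_handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # Object keys arrive URL-encoded (spaces become '+', and so on).
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    return {"bucket": bucket, "key": key}
```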
Once you've done that, you need to create a script, and that could be a shell script, a script in a language that is included in the Amazon Linux distribution, or a binary executable file. Once you've created that, you put it inside your deployment package, or within a layer within your deployment package. Provided the function's runtime is set to provided, Lambda will look for an entry point called bootstrap. That bootstrap will be a script that starts your runtime executable or runtime script.

In order to understand Lambda properly we need to understand the execution context of an instance, and this is going to be quite different to an EC2 instance, because these are temporary, short-lived instances. The execution context is a temporary runtime environment that initializes the external dependencies that our handler requires. Our main code will reference a lot of external dependencies; they could be libraries, they could be packages, whatever they are, and Lambda will maintain the execution context for some time, allowing reuse, because when we invoke a Lambda function it's going to take time to initialize those external dependencies.
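The bootstrap entry point loops against the Lambda runtime API: fetch the next event, run the handler, post the result back. Here is a sketch of that loop in Python (the echo "handler" is a stand-in; a real bootstrap would load and call your own code). Note that main() is defined but not called here; in a deployment package this file, named bootstrap and made executable, would be what Lambda runs.

```python
# bootstrap - sketch of the loop a custom runtime runs against the
# Lambda runtime API. The API host comes from AWS_LAMBDA_RUNTIME_API.
import json
import os
import urllib.request

API_VERSION = "2018-06-01"

def next_invocation_url(api_host):
    # Endpoint that blocks until the next event is available.
    return f"http://{api_host}/{API_VERSION}/runtime/invocation/next"

def response_url(api_host, request_id):
    # Endpoint for posting this invocation's result.
    return f"http://{api_host}/{API_VERSION}/runtime/invocation/{request_id}/response"

def main():
    api_host = os.environ["AWS_LAMBDA_RUNTIME_API"]
    while True:
        # 1. Block until the runtime API hands us the next event.
        with urllib.request.urlopen(next_invocation_url(api_host)) as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.load(resp)
        # 2. Run the handler (here: trivially echo the event back).
        result = json.dumps({"echo": event}).encode()
        # 3. POST the result back to the runtime API.
        req = urllib.request.Request(response_url(api_host, request_id), data=result)
        urllib.request.urlopen(req)
```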
So if we can reuse those, it's going to make the invocation time, the latency to launch these temporary instances, a lot quicker. The way that we can take advantage of the execution context to reduce invocation latency is to declare our objects outside of the function's handler, so that when the handler returns, those objects that are outside of the handler method remain initialized. That could be a database connection, and database connections can take quite a few seconds to establish, so keeping those database connections open to be reused by later invocations makes sense.

Another thing we need to keep in mind is that we have a temporary directory on all of our Lambda instances. Normally, when that instance is terminated, the contents of that temporary directory are also lost, but for a certain amount of time they remain and can be reused by later invocations. Background processes and callbacks, if they're defined outside of the function handler, may remain for a certain amount of time as well, so care must be taken, because you may lose content if it's in that temporary directory.
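The reuse pattern can be sketched like this: anything at module level runs once per execution environment (the cold start) and survives across warm invocations, while anything inside the handler runs on every invocation. The connection object here is a stand-in, not a real database client.

```python
# Sketch of execution-context reuse: module-level code runs once per
# environment (cold start) and is reused by warm invocations.
INIT_COUNT = 0

def _connect():
    # Stand-in for an expensive setup step such as a database connection.
    global INIT_COUNT
    INIT_COUNT += 1
    return {"connected": True}

# Declared OUTSIDE the handler, so it survives between invocations.
connection = _connect()

def lambda_handler(event, context):
    # Reuses the already-initialized connection instead of reconnecting.
    return {"init_count": INIT_COUNT, "connected": connection["connected"]}
```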
So that directory is temporary, and make sure that you treat it as temporary. Also, any unwanted background processes that may still exist need to be terminated before the instance is terminated.

In order to deploy a Lambda function you need to create a Lambda deployment package, and that is simply a zip archive that contains all of your function code and its dependencies. You can create that zip archive and upload it using the console when you're creating your Lambda function, or you can put it into an Amazon S3 bucket and reference that S3 bucket. If the deployment package is greater than 50 MB you must use Amazon S3. Now, your external packages don't need to be in that zip archive; you can reference those in a package.json if it's a Node.js application, or in a requirements.txt if it's a Python application, and from there you can use the SAM CLI build command, which can create that package and put it in the correct location ready for deployment.

When we have created our Lambda deployment package in a zip archive, we can go to the Lambda console, create the function, and simply supply that code, or else reference an S3 bucket that has that code.
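Since the deployment package is just a zip archive, it can be produced with ordinary tooling. This sketch zips a source directory, keeping paths relative to its root so the handler file sits at the top of the archive; the directory and file names are examples.

```python
# Sketch: building a Lambda deployment package (a plain zip archive)
# from a source directory.
import pathlib
import zipfile

def build_package(src_dir, zip_path):
    src = pathlib.Path(src_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in src.rglob("*"):
            if path.is_file():
                # Store paths relative to the source root, so the handler
                # file sits at the top of the archive as Lambda expects.
                zf.write(path, path.relative_to(src))
    return zip_path
```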
We can also use CloudFormation, and we do that by having a resource of type AWS::Lambda::Function, and we need to define some properties: the code, which will be the S3 bucket that contains that code, or, if it's not very long, we can just put the code inline in that code property. We also have the handler, and that will be the code that Lambda calls to execute your function, so that will be the file and also the name of the handler function. We also need a role for the Lambda function. That will define what permissions the Lambda function will have, so we need to define the ARN, or Amazon Resource Name, of that role. We also need the type of runtime: what language are we going to be using, is it going to be Node.js, Python, Java? We need to define that as well. It's a good idea to give it a function name, but if it's missing, CloudFormation will generate a unique one for you. We can also use the command line interface or one of the many software development kits, and we use a create-function command to do that, and again we will need to define the code, the handler, the role and the runtime for that command.
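The four required pieces just listed (code, handler, role, runtime) map directly onto the CreateFunction API parameters. The sketch below only builds the parameter dict; the bucket, key and role ARN are placeholders. Passing the dict to boto3's Lambda client as client.create_function(**params) would create the function.

```python
# Sketch of the parameters the CreateFunction API needs: the code
# location, the handler, the execution role, and the runtime.
def create_function_params(name, bucket, key, role_arn):
    return {
        "FunctionName": name,
        "Runtime": "python3.12",   # managed runtime to use
        "Role": role_arn,          # ARN of the execution role
        "Handler": "lambda_function.lambda_handler",  # file.function entry point
        "Code": {"S3Bucket": bucket, "S3Key": key},   # where the zip archive lives
    }
```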
The Serverless Application Model, or SAM for short, is part of the CloudFormation specification, and it can simplify the deployment of our serverless applications, which may include not only Lambda but other services such as API Gateway. In order to tell CloudFormation that we're going to be using the Serverless Application Model, our template needs a Transform declaration with the value AWS::Serverless-2016-10-31, and after that we define our resources. As in any CloudFormation template, we must have resources. They can be a combination of CloudFormation and SAM resources: a function, an API, a state machine, a DynamoDB table, but the template could also reference an EC2 instance; it doesn't have to be only serverless resources that we're defining. There is also something that is unique to the SAM model, and that is Globals, which are used to define shared configuration throughout the template. For example, if all of the Lambda instances that you are launching are going to be using the same Node.js runtime, you can define that runtime as a global and share it across all of them.
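A minimal SAM template showing the Transform declaration, a Globals section, and one function resource might look like this; the function name, handler, and code path are illustrative.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # tells CloudFormation this is a SAM template

Globals:
  Function:
    Runtime: nodejs18.x                 # shared by every function in the template
    Timeout: 10

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler            # file.function entry point
      CodeUri: ./src                    # location of the function code
```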
You don't need to define that again, and the same with the handler, if you want to reuse that handler without having to define it for every single function definition. Then we have the SAM command line interface, and we can use the sam build command, which will prepare our code for deployment. It will read our Node.js package.json or our Python requirements.txt file, build our application ready for deployment, and put it into the right location for us. Then we simply run sam deploy, and that will deploy our application, and that application will appear in the Lambda console as an application; those SAM apps can be simply managed using the Lambda console.

If we make changes to our Lambda function's code, and we don't want to affect the current production version that is being used, what we can do is publish a new version of that Lambda function. That will create a new function version with a totally different Amazon Resource Name, so it won't affect our current version, which keeps a different ARN. We make our changes to the code, then we publish the new version, and that new version will include that code and any dependencies.
It will also include the runtime from the previous version and any settings, including the environment variables. The way we create a new version is that we go into the Lambda console, select our function, make the changes to our code, and then simply publish a new version, and that will create a new version with a new ARN. Or we can go into the AWS command line interface and use the lambda publish-version command. Now, one thing that you need to take into consideration is that when the ARN of a Lambda function changes, any resource-based policies that reference the old ARN won't work anymore, and neither will any events that reference it. For example, you may have an S3 event that triggers when something is uploaded to an S3 bucket; if that is referencing the old ARN, then it will no longer work, or will no longer perform how you expected it to, so you need to take that into consideration.

Once a version has been published, its code cannot be changed; going into the Lambda console and trying to change that code, it just won't let you do that. So what you need to do is select the latest version, which is your working version of the code, make your changes there, and then publish another version. But once a version has been published, you cannot change it afterwards.
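The versioned ARNs behave like this sketch: each published version's ARN is the function ARN with the version number appended, which is why anything that stored the old qualified ARN keeps pointing at the old version. The account number below is a placeholder.

```python
# Sketch: every published version gets its own qualified ARN, formed by
# appending the version number to the function ARN. A policy or event
# source that stored the version-1 ARN keeps pointing at version 1
# even after version 2 is published.
def qualified_arn(function_arn, version):
    return f"{function_arn}:{version}"
```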
You have to just use your working version and make your changes to that working version. In order to overcome the problem of the ARN changing every time we publish a new version, what we can do is create an alias. An alias is a pointer to a specific version of a Lambda function, and that alias has its own unique, unchanging ARN. So we can take an alias that was pointing to an old version and update it to point to our new version, while keeping the same ARN. We can get our function code tested and working properly and then use our alias to point to the new version, and what we can also do is shift traffic between those two versions based on percentage weights. So we could start off having only 10% of our traffic going to the new version and then slowly migrate all of our traffic over, and that's going to reduce headaches for us if there are any problems: we're not going to get a big mass of users having problems at once. So resource-based policies and any events can reference the alias ARN, and the alias ARN will always be pointing to whichever version of the Lambda function we want as our production version. We can do that in the Lambda console, or we can use the command line interface, and the command there is lambda create-alias.
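The weighted-traffic idea maps onto the CreateAlias API's routing configuration. This sketch only builds the parameter dict; the function name, alias name, and version numbers are examples. Passing the dict to boto3's Lambda client as client.create_alias(**params) would create the alias.

```python
# Sketch of the parameters for the CreateAlias API, including a weighted
# routing config that sends a fraction of traffic to a second version.
def create_alias_params(function_name, alias, stable_version,
                        canary_version, canary_weight):
    return {
        "FunctionName": function_name,
        "Name": alias,                      # e.g. "prod"
        "FunctionVersion": stable_version,  # version receiving most traffic
        "RoutingConfig": {
            # Extra versions and the fraction of traffic they receive.
            "AdditionalVersionWeights": {canary_version: canary_weight}
        },
    }
```

With canary_weight set to 0.1, version 2 receives 10% of invocations while the stable version keeps the rest; raising the weight gradually migrates the traffic.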
Environment variables are version-specific key-value pairs. For example, you might have the database name as a key and then the name of that database as the value for that key. What they do is allow you to store secrets securely and adjust your function's behavior without updating your code. So, for example, they might hold your database connection details, such as a username and password, and you can update those without having to go into your code and update it. It's also very important for storing those secrets securely, outside of your function's code: it's far better to put them in environment variables than to hard-code them. You need to take into consideration that they cannot be changed after a version has been published, so you need to use your latest working version and make the changes to the environment variables there. The way we use environment variables is that we can go into the Lambda console, select our function, go to environment variables and add an environment variable. We can also use the AWS command line interface or one of the many software development kits, and the command there is lambda update-function-configuration with the --environment option, where we define the variables with their keys and values.
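Inside the function, the variables arrive through the process environment. A sketch of a handler reading its database name this way (DB_NAME is an example variable name, not anything Lambda defines):

```python
# Sketch of a handler reading its configuration from environment
# variables, so the database name can change without a code change.
import os

def lambda_handler(event, context):
    db_name = os.environ.get("DB_NAME", "dev-db")  # fallback for local runs
    return {"using_database": db_name}
```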
We can also use the Serverless Application Model or CloudFormation: in the properties of our function we define Environment, then Variables, and then each key and value.

As we have learned from our discussion on the execution context, the invocation of our Lambda function can be slowed down if a lot of packages and libraries need to be initialized beforehand, and our zip archive can also be quite big if it includes those packages and libraries. So what we do is define those outside of our Lambda function handler, which makes sure they can be reused later. What we can also do is take advantage of putting all of these libraries and packages into a zip archive and putting them into a layer, and those layers can be reused by other Lambda functions, which allows us to keep the deployment package small. We can define up to five layers per function, but the function code and all of those layers must still be less than 250 MB. We can use the Lambda console to define a layer and upload that zip archive, or we can use the AWS command line interface, and the command is lambda update-function-configuration with the --layers option to attach the layer.
We can also use the Serverless Application Model or CloudFormation and define the layer in our template's resources. The resource might be called libs, the type will be AWS::Serverless::LayerVersion, and then we just need to define the ContentUri, which is the location of the zip archive that contains those libraries and packages.

The Serverless Application Repository is a managed repository for SAM applications, and it allows us to share and reuse our applications. We can also search for applications that have been made and published for open use by other users, and by Amazon Web Services as well. The Serverless Application Repository console can be used to publish any of our SAM applications, or we can use the CLI sam publish command, which will also publish our application to the SAR. What we need to do is include a Metadata section in our SAM template, and that will include details such as the name of the application, the description, the license for that application, and the source code URL, for example the URL of a GitHub repository.
If we want to use an application that's on the Serverless Application Repository, we can simply go to the console, browse and search for it, and we will find any application that we've put on there, that others have put on there for public use, or that AWS has put on there. Once we've found one that we'd like to use, we can configure it: we can set our environment variables and any parameters, such as the size of our function and that sort of thing, and then we can deploy it as a SAM application and manage it in the Lambda management console.

There are a number of different ways of invoking a Lambda function. If we want to invoke a Lambda function for testing, we can simply use the Lambda console after we've created that function: we hit test, and that will invoke the Lambda function and send a test event to it. Or we can use one of the many software development kits or the command line interface. There are two different types of invocation. The first one is synchronous invocation, which is the standard one, and that is where we wait for that function to process the event that we have sent to it and then wait for the Lambda function to return a response.
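The two invocation types are selected through the Invoke API's InvocationType parameter. This sketch only builds the parameter dict; the function name and payload are examples. Passing either dict to boto3's Lambda client as client.invoke(**params) would perform the invocation.

```python
# Sketch of the parameters for the Invoke API. InvocationType selects
# synchronous ("RequestResponse") or asynchronous ("Event") invocation.
import json

def invoke_params(function_name, payload, asynchronous=False):
    return {
        "FunctionName": function_name,
        # "RequestResponse" waits for the result; "Event" queues the
        # event and returns immediately.
        "InvocationType": "Event" if asynchronous else "RequestResponse",
        "Payload": json.dumps(payload),
    }
```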
The other way that we can invoke a Lambda function is with asynchronous invocation, and in that situation the Lambda service will queue the event for processing and will return a response immediately, before that Lambda function has even been invoked, so we don't have to wait around for the response; we're going to get a response straight away. If we would like to take advantage of asynchronous invocation, we need to set the invocation type to Event. So events can invoke a function directly: for example, you may have an S3 event on an Amazon S3 bucket, so that when an object is uploaded to that bucket the S3 event will trigger and directly invoke a Lambda function. Or the Lambda service itself can read an event source: that could be a stream, such as a Kinesis stream or a DynamoDB stream, or it could be an SQS queue. The Lambda service will read that, and when it needs to it will invoke a function, and that is called Lambda event source mapping. So here we can see we've got synchronous invocation: our clients, which could be a resource, an IAM user, whatever, have created an event, that has invoked a Lambda function, and those clients are waiting for a response to come back.
Here we can see an asynchronous invocation: our clients create an event, and that goes into an event queue. As soon as it goes into the event queue, the Lambda service immediately sends a response back, and then the Lambda service invokes a Lambda function and passes the event details from that event queue. Finally we've got event source mapping, shown here with a Kinesis stream. We've got our user, or resource, or whatever, creating records that go into a Kinesis data stream. The Lambda service performs the event source mapping: it reads the Kinesis data stream and, based on what's in that stream, it invokes a Lambda function and sends the events as a batch to be processed by that Lambda function. If it fails to invoke the Lambda function, or the event fails to be processed, then the event will go into a failed-event destination queue.

Services that invoke Lambda functions synchronously include API Gateway, Lambda@Edge, Kinesis Data Firehose (as opposed to Kinesis streams, Data Firehose invokes synchronously), Step Functions, and Application Load Balancer.
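Setting up event source mapping goes through the CreateEventSourceMapping API. This sketch only builds the parameter dict; the stream ARN is a placeholder. Passing the dict to boto3's Lambda client as client.create_event_source_mapping(**params) would create the mapping.

```python
# Sketch of the parameters for the CreateEventSourceMapping API, which
# tells the Lambda service to read a stream or queue and invoke the
# function with batches of records.
def event_source_mapping_params(function_name, source_arn, batch_size=100):
    return {
        "FunctionName": function_name,
        "EventSourceArn": source_arn,  # Kinesis stream, DynamoDB stream, or SQS queue
        "BatchSize": batch_size,       # max records handed to one invocation
    }
```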
Services that invoke asynchronously include Amazon S3, SNS, SES, CloudFormation, CloudWatch Logs and Events, and also CodeCommit and CodePipeline. The services that Lambda reads events from by using event source mapping are the streams and queues: Kinesis streams, DynamoDB streams, and SQS queues. Lambda will read those, and the Lambda service will invoke the Lambda functions based on what's in those streams or in that queue.

By default, a Lambda function cannot connect to any resources inside a private subnet in your VPC, and the reason is that your VPC is your own private space within the cloud, and the Lambda service is not going to operate inside your private space; it operates in an AWS-owned VPC. So what we need to do is associate our VPC with our Lambda function, and then the Lambda service will create network interfaces within our private subnets that will allow those Lambda functions to connect to resources within those private subnets. Now, because network interfaces are being created, the Lambda service requires an execution role with permission to create network interfaces. So the way that we do that is that in the Lambda console we associate a VPC, the subnets, and also the security groups with the Lambda function.
So we go into the Lambda console, edit the Lambda function, scroll down to VPC, and choose a custom VPC. Once that's done, and it takes a little bit of time, the Lambda service will go through and create a network interface to allow access within that private subnet. We can also use the command line interface or one of the many software development kits, and the command there is lambda create-function, or lambda update-function-configuration if it's an existing Lambda function, and then we just need to define, with a VPC config, the subnet IDs of our private subnets and any security group IDs as well. We can also do that with CloudFormation or the Serverless Application Model: we go into the function properties and define VpcConfig, and again we define the subnet IDs for the private subnets and the security group IDs as well.
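That VPC config has the same shape whichever tool you use. This sketch builds the parameter dict for the UpdateFunctionConfiguration API; the subnet and security group IDs are placeholders. Passing the dict to boto3's Lambda client as client.update_function_configuration(**params) would apply it.

```python
# Sketch of attaching a function to a VPC: the subnets the Lambda
# service will place network interfaces into, and the security groups
# that control traffic from those interfaces.
def vpc_config_params(function_name, subnet_ids, security_group_ids):
    return {
        "FunctionName": function_name,
        "VpcConfig": {
            "SubnetIds": subnet_ids,                # private subnets to reach
            "SecurityGroupIds": security_group_ids, # traffic rules for the ENIs
        },
    }
```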
Okay, so here we can see how it all works. We've got the 301 00:28:03,749 --> 00:28:12,239 AWS cloud, and we've got our customer-owned VPC, and AWS has 302 00:28:12,239 --> 00:28:17,700 its own AWS Lambda VPC. We don't own that VPC; our 303 00:28:17,700 --> 00:28:23,249 VPC is on the right-hand side and the AWS VPC is on the 304 00:28:23,249 --> 00:28:28,559 left-hand side, and our Lambda functions are located inside of that AWS 305 00:28:28,559 --> 00:28:38,429 VPC. AWS will create a VPC-to-VPC NAT instance or NAT service, and 306 00:28:38,429 --> 00:28:43,559 that will connect to an elastic network interface that the Lambda service will 307 00:28:43,559 --> 00:28:49,470 create inside of our VPC, and so that will allow our Lambda functions to 308 00:28:49,470 --> 00:28:54,979 access resources inside of those private subnets. 309 00:28:54,979 --> 00:29:02,609 By default your Lambda function also cannot connect to the internet. So what 310 00:29:02,609 --> 00:29:08,940 we need to do is have a VPC with a public subnet, and being a 311 00:29:08,940 --> 00:29:13,649 public subnet it's going to have a route to an internet gateway, and we need 312 00:29:13,649 --> 00:29:20,039 to allow our Lambda function to access the internet through that public subnet. 313 00:29:20,039 --> 00:29:26,909 It does that, again, by creating a network interface inside of that public 314 00:29:26,909 --> 00:29:32,909 subnet, so we require our Lambda service to have an execution role to create 315 00:29:32,909 --> 00:29:37,950 network interfaces. So when we're creating our function we need to define 316 00:29:37,950 --> 00:29:42,179 an execution role that has those permissions to create those network 317 00:29:42,179 --> 00:29:47,729 interfaces. Then we have a public subnet, and that public subnet must 318 00:29:47,729 --> 00:29:54,450 also have a NAT gateway or a NAT instance inside of it, and that will
319 00:29:54,450 --> 00:29:59,970 allow that outbound traffic. Once we've done that, we can associate 320 00:29:59,970 --> 00:30:05,039 that VPC and that public subnet with the function in a similar way to how we 321 00:30:05,039 --> 00:30:10,830 associate our private subnets, and we do that again using the Lambda 322 00:30:10,830 --> 00:30:17,549 console, using the CLI or SDKs, or also using CloudFormation or the Serverless 323 00:30:17,549 --> 00:30:20,690 Application Model. 324 00:30:21,220 --> 00:30:27,980 As we know from our discussion on the execution context, database connections 325 00:30:27,980 --> 00:30:34,310 that need to be initialized can increase our function's invocation latency, which 326 00:30:34,310 --> 00:30:38,780 is not a good thing. So what we need to do is make sure that our 327 00:30:38,780 --> 00:30:44,750 connections are defined outside of our function handler, and that will allow 328 00:30:44,750 --> 00:30:50,780 them to be reused after our function has terminated. Now, even 329 00:30:50,780 --> 00:30:56,360 better than that, we can use the Amazon RDS Proxy service if we're using RDS 330 00:30:56,360 --> 00:31:01,930 instances or the RDS service, and what that will do is maintain a pool of 331 00:31:01,930 --> 00:31:06,860 connections to that RDS service, and it will have a single 332 00:31:06,860 --> 00:31:13,070 endpoint for that. We need to have an execution role for our Lambda function 333 00:31:13,070 --> 00:31:19,520 that has access to that database instance or the RDS Proxy service that 334 00:31:19,520 --> 00:31:24,260 we've set up, and that execution role can be used 335 00:31:24,260 --> 00:31:29,720 for authenticating with the RDS Proxy service instead of having to use a 336 00:31:29,720 --> 00:31:36,470 username and password. So that again is going to reduce our latency; we can 337 00:31:36,470 --> 00:31:41,180 just jump straight in with that execution role and start using that 338
00:31:41,180 --> 00:31:46,460 database immediately without having to go through and supply a username 339 00:31:46,460 --> 00:31:49,390 and password. 340 00:31:49,540 --> 00:31:55,010 If we have a large number of files that need to be accessed by our Lambda 341 00:31:55,010 --> 00:32:02,750 functions, we can connect our Lambda functions to the Elastic File System, and 342 00:32:02,750 --> 00:32:10,250 the EFS file system can be mounted to our function's local directory. It can 343 00:32:10,250 --> 00:32:15,620 use that in the same way that it uses its temporary folder, but it will use it 344 00:32:15,620 --> 00:32:20,660 as permanent storage, and that permanent storage will also be 345 00:32:20,660 --> 00:32:25,010 available to all of your functions. The way that we do that is we can 346 00:32:25,010 --> 00:32:31,760 use the Lambda console and then add a file system, select our EFS volume, and then 347 00:32:31,760 --> 00:32:36,740 we'll define the local mount path, which will start with /mnt; you 348 00:32:36,740 --> 00:32:41,570 define what that local mount path will be, and from there your 349 00:32:41,570 --> 00:32:48,260 Lambda function can access those files through that local mount path. Or 350 00:32:48,260 --> 00:32:52,340 we can use the CLI or the software development kits, and the command there 351 00:32:52,340 --> 00:32:57,500 is lambda create-function for a new function or lambda update-function-configuration 352 00:32:57,500 --> 00:33:03,320 for an existing function, and then we simply define the file system 353 00:33:03,320 --> 00:33:08,900 configs and specify that elastic file system in there. Or we can use 354 00:33:08,900 --> 00:33:14,450 CloudFormation or the Serverless Application Model and, in our template, in our 355 00:33:14,450 --> 00:33:20,240 function's properties, define FileSystemConfigs and then again specify 356 00:33:20,240 --> 00:33:24,010 that elastic file system. 357 00:33:28,210 -->
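The connection-reuse pattern described a moment ago can be sketched like this in Python. The connection factory here is a placeholder (in real code it would be your database driver, e.g. pymysql pointed at an RDS Proxy endpoint); the point is simply that the cached connection lives outside the handler, so warm invocations skip the setup cost.

```python
# Objects created OUTSIDE the handler survive for the lifetime of the
# Lambda instance (the execution context), so a connection opened once
# on a cold start is reused by every subsequent warm invocation.
_connection = None

def open_connection():
    """Placeholder connection factory. In real code this would be your
    driver, e.g. pymysql.connect(host=<rds-proxy-endpoint>, ...)."""
    raise NotImplementedError("plug in your real database driver here")

def get_connection(connect=open_connection):
    """Lazily open and cache one connection per Lambda instance."""
    global _connection
    if _connection is None:
        _connection = connect()   # only runs on a cold start
    return _connection            # warm invocations reuse the connection

def handler(event, context=None):
    conn = get_connection()       # cheap on warm starts
    # ... run queries with conn and build a response ...
    return {"statusCode": 200}
```

With RDS Proxy in front of the database, the same cached-endpoint idea applies, but the proxy also pools connections on its side and lets the execution role authenticate instead of a username and password.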
00:33:34,610 Concurrency is the number of requests that your function is serving at a given 358 00:33:34,610 --> 00:33:39,950 time, and that is subject to a regional limit that is shared by all of your 359 00:33:39,950 --> 00:33:47,000 functions. So it's not possible to have a zillion concurrent invocations of your 360 00:33:47,000 --> 00:33:53,030 functions; it's subject to a regional limit. What you can do, if you have got 361 00:33:53,030 --> 00:33:58,160 critical functions that you want to make sure have a high level of concurrency 362 00:33:58,160 --> 00:34:03,620 available, is define a reserved concurrency, and that will ensure that 363 00:34:03,620 --> 00:34:10,070 that function can always reach a certain level of concurrency, but your other 364 00:34:10,070 --> 00:34:15,679 functions cannot use that reserved concurrency that you've allocated to 365 00:34:15,679 --> 00:34:22,700 that function. If you find that you have a certain time period that you 366 00:34:22,700 --> 00:34:26,929 want to provision a certain amount of concurrency for, then you can use 367 00:34:26,929 --> 00:34:33,080 provisioned concurrency, and that allows you to allocate a level of concurrency 368 00:34:33,080 --> 00:34:39,560 before the increase in invocations from an increase in demand occurs, and that 369 00:34:39,560 --> 00:34:46,250 will ensure that those requests are served by initialized instances. So it 370 00:34:46,250 --> 00:34:50,450 will not only make sure that the concurrency is available, it will make 371 00:34:50,450 --> 00:34:56,750 sure that those instances are initialized and ready to go. We can also 372 00:34:56,750 --> 00:35:03,260 use the AWS Application Auto Scaling service to manage this provisioned 373 00:35:03,260 --> 00:35:08,840 concurrency, making sure that we have initialized instances available, and we 374 00:35:08,840 --> 00:35:14,230 can do that based on a schedule, or we can do that based on demand, based on 375 00:35:14,230 --> 00:35:20,360
utilization, in the same way that we do with an EC2 Auto Scaling group. We 376 00:35:20,360 --> 00:35:26,050 can have this provisioned concurrency of these provisioned 377 00:35:26,050 --> 00:35:33,560 instances or invocations auto scaling automatically for us. But of course you 378 00:35:33,560 --> 00:35:38,390 also need to take into consideration that it may be cheaper 379 00:35:38,390 --> 00:35:44,539 to use an auto scaling EC2 group rather than using Lambda, because you're paying 380 00:35:44,539 --> 00:35:49,400 for the convenience of a serverless service. So make sure that you do your 381 00:35:49,400 --> 00:35:54,589 testing and confirm that the cost of this great 382 00:35:54,589 --> 00:35:58,940 service, with its great out-of-the-box scaling in and out using 383 00:35:58,940 --> 00:36:06,859 Application Auto Scaling, works for you. When you create a Lambda function and you define 384 00:36:06,859 --> 00:36:12,890 its configuration, the Lambda service will allocate the CPU power of that 385 00:36:12,890 --> 00:36:18,859 function linearly in proportion to the amount of memory that you have 386 00:36:18,859 --> 00:36:26,269 configured. What that means is that you cannot directly change the CPU power of 387 00:36:26,269 --> 00:36:32,660 a Lambda function; what you can do is increase the memory of that 388 00:36:32,660 --> 00:36:41,839 function significantly, and that will increase the CPU power. At 1,792 MB 389 00:36:41,839 --> 00:36:48,859 a function has the equivalent of one full vCPU, so if you double the memory then that 390 00:36:48,859 --> 00:36:57,259 will allow for two full vCPUs. So setting the memory for a Lambda function implicitly 391 00:36:57,259 --> 00:37:04,359 sets not only the CPU power but also the network 392 00:37:04,359 --> 00:37:11,420 connectivity and also some other resources as well. So memory is the main 393 00:37:11,420 --> 00:37:16,250 thing that you define; if you're
finding that your function cannot handle your 394 00:37:16,250 --> 00:37:21,069 code, increase the memory of that function. 395 00:37:21,349 --> 00:37:27,839 Lambda functions can be used as a target for an Application Load Balancer, 396 00:37:27,839 --> 00:37:33,390 and that has a number of significant advantages. It allows requests to be 397 00:37:33,390 --> 00:37:39,599 received from the internet through the Application Load Balancer's single endpoint. 398 00:37:39,599 --> 00:37:47,280 It also allows you to create load balancer rules to route those HTTP 399 00:37:47,280 --> 00:37:55,410 requests to a specific function based upon a path or based upon values inside 400 00:37:55,410 --> 00:38:00,569 the header of that request, and that is very important because you could have 401 00:38:00,569 --> 00:38:07,049 a Lambda function for e-commerce, a Lambda function for image processing, and 402 00:38:07,049 --> 00:38:12,690 you can define the exact resources that you want for each. You could also have 403 00:38:12,690 --> 00:38:18,599 those critical functions with a higher level of concurrency available than other 404 00:38:18,599 --> 00:38:22,950 functions that aren't so critical, making sure that those requests go to 405 00:38:22,950 --> 00:38:28,890 the right locations, to those critical functions. Now, the ALB 406 00:38:28,890 --> 00:38:33,569 invokes those functions synchronously, with an event containing the request 407 00:38:33,569 --> 00:38:38,480 body and any metadata with that request. 408 00:38:42,289 --> 00:38:48,299 By default Lambda monitors your functions and sends metrics for those 409 00:38:48,299 --> 00:38:53,339 functions to the CloudWatch service, and you can view the monitoring 410 00:38:53,339 --> 00:38:59,430 graphs of those metrics in the Lambda console. Okay, in the dashboard here 411 00:38:59,430 --> 00:39:04,469 we can see the invocations; that will be the number of invocations at that 412 00:39:04,469 -->
00:39:08,279 point in time. So if there are 20 invocations of a function, that will show 413 00:39:08,279 --> 00:39:13,799 up there. We also see the duration that the invocation lasted for, the error counts 414 00:39:13,799 --> 00:39:19,259 and the success rate, and the number of times that the function needed to be 415 00:39:19,259 --> 00:39:24,749 throttled due to too many requests. We also have the iterator age; what that 416 00:39:24,749 --> 00:39:31,319 is is that when we're using the Lambda service with a stream or with an SQS queue, it 417 00:39:31,319 --> 00:39:36,479 will be the age from when that record entered the stream to when the Lambda 418 00:39:36,479 --> 00:39:41,819 function was invoked, and that will vary depending on how big that 419 00:39:41,819 --> 00:39:47,329 stream or queue is. We also have dead-letter errors, and those will be 420 00:39:47,329 --> 00:39:51,509 errors, or requests that could not be served by the Lambda 421 00:39:51,509 --> 00:39:56,239 function, and they will go into a dead-letter queue. 422 00:39:56,670 --> 00:40:02,020 The metrics that are sent to the CloudWatch service after your function has 423 00:40:02,020 --> 00:40:08,250 processed an event can be divided into three main types, being invocation, 424 00:40:08,250 --> 00:40:15,010 performance or concurrency metrics. The first one there is invocations, the number 425 00:40:15,010 --> 00:40:19,750 of invocations of our function that we have at that point in time; the 426 00:40:19,750 --> 00:40:25,330 errors at that point in time; and the number of times that the function needed to be 427 00:40:25,330 --> 00:40:30,280 throttled. We have the provisioned concurrency invocations, how many of 428 00:40:30,280 --> 00:40:35,580 those invocations are on provisioned concurrency, and how many are 429 00:40:35,580 --> 00:40:42,010 provisioned concurrency spillover into standard invocations. Also we have the 430 00:40:42,010 --> 00:40:47,620 asynchronous ones, purely for asynchronous invocations, and they are
the 431 00:40:47,620 --> 00:40:53,800 dead-letter errors, where an event failed to be delivered and also 432 00:40:53,800 --> 00:40:58,900 failed to enter the dead-letter queue, and also we have destination 433 00:40:58,900 --> 00:41:04,090 delivery failures, where that event failed to be delivered as well. We also 434 00:41:04,090 --> 00:41:09,040 have the performance metrics, being the duration of that invocation 435 00:41:09,040 --> 00:41:15,790 and the iterator age, so where we're using a stream or a queue, the age of 436 00:41:15,790 --> 00:41:21,070 that record from when it entered that stream to when the Lambda service invoked 437 00:41:21,070 --> 00:41:26,890 that function. And the concurrency metrics: the concurrent executions of 438 00:41:26,890 --> 00:41:31,560 that function; the provisioned concurrent executions, how many of those are on 439 00:41:31,560 --> 00:41:36,610 provisioned concurrency; the provisioned concurrency utilization, how 440 00:41:36,610 --> 00:41:41,580 much are we utilizing of that provisioned concurrency; and the unreserved 441 00:41:41,580 --> 00:41:46,480 concurrency executions, how many of those are operating on unreserved 442 00:41:46,480 --> 00:41:53,390 concurrency. When you create a Lambda function, the 443 00:41:53,390 --> 00:42:00,109 Lambda service automatically creates a CloudWatch log group for that 444 00:42:00,109 --> 00:42:06,859 Lambda function, and it will create a log 445 00:42:06,859 --> 00:42:13,069 stream for each instance of that Lambda function, and when that 446 00:42:13,069 --> 00:42:18,980 Lambda function is invoked its runtime will send details about each of those 447 00:42:18,980 --> 00:42:25,640 invocations to that specific log stream. Now we can also take advantage of 448 00:42:25,640 --> 00:42:31,250 this CloudWatch log and this log stream by sending details about what's 449 00:42:31,250 --> 00:42:36,140 occurring in our code, so we
can send a message to that 450 00:42:36,140 --> 00:42:42,170 log stream. The way that works is that function code that outputs to the 451 00:42:42,170 --> 00:42:48,920 console will output to those CloudWatch logs, because our Lambda service 452 00:42:48,920 --> 00:42:54,410 doesn't have a console screen, so it outputs that directly to our CloudWatch 453 00:42:54,410 --> 00:42:59,299 log. So the way that we do that is we use the console.log command in Node.js 454 00:42:59,299 --> 00:43:05,180 or the print command in Python, and 455 00:43:05,180 --> 00:43:09,500 whatever is in that console.log command or that print command will go 456 00:43:09,500 --> 00:43:14,079 directly to that log stream. 457 00:43:15,190 --> 00:43:20,630 If we want to monitor access to the Lambda service and our Lambda functions 458 00:43:20,630 --> 00:43:27,020 we can use the CloudTrail service to do that, and CloudTrail will capture all of 459 00:43:27,020 --> 00:43:32,780 the API calls made to Lambda from the console, from the command line interface 460 00:43:32,780 --> 00:43:37,550 or from one of the many software development kits, and it will store that 461 00:43:37,550 --> 00:43:42,530 in a CloudTrail log. Each one of those entries will have the 462 00:43:42,530 --> 00:43:49,430 IP address of that user, it will have the IAM user or the role that was used to 463 00:43:49,430 --> 00:43:56,600 access Lambda, and also the timestamp of that API call. So if a trail is enabled 464 00:43:56,600 --> 00:44:02,780 then all of the events will be saved to Amazon S3. If you haven't enabled a trail 465 00:44:02,780 --> 00:44:09,680 you can still see the most recent events in the CloudTrail console's Event 466 00:44:09,680 --> 00:44:14,840 history.
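The console-output logging idea from earlier can be sketched in a few lines of Python. The event fields here are hypothetical placeholders; the point is that anything the handler writes to standard output ends up in the function's CloudWatch log stream (console.log plays the same role in Node.js).

```python
import json

def handler(event, context=None):
    """Minimal Lambda handler that logs to CloudWatch via print().

    Inside Lambda there is no console screen, so the runtime forwards
    stdout/stderr to the function's CloudWatch Logs stream; a plain
    print() is all that's needed.
    """
    print("received event:", json.dumps(event))  # goes to the log stream
    result = {"ok": True, "keys": sorted(event)}
    print("returning:", json.dumps(result))      # also logged
    return result
```

For structured logs you can print JSON strings, which makes the entries easy to filter later with CloudWatch Logs queries.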
So that brings us to the end of a pretty detailed discussion on the 467 00:44:14,840 --> 00:44:19,640 Lambda service and it's a great service and a really fun one to get your hands on. So 468 00:44:19,640 --> 00:44:23,510 coming up next we're going to have some labs on Lambda and I hope you enjoy 469 00:44:23,510 --> 00:44:28,390 those and I look forward to seeing you in the next ones.