update

tunmnlu/task_3/Skeleton/Q3/q3.ipynb (new file, 366 lines)
@@ -0,0 +1,366 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# HW3 - Q3 [35 pts]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Important Notices\n",
"\n",
"<div class=\"alert alert-block alert-danger\">\n",
"    WARNING: <strong>REMOVE</strong> any print statements added for debugging to cells marked with \"#export\" before submitting, because they will crash the autograder in Gradescope. Any additional cells at the bottom can be used for testing purposes. \n",
"</div>\n",
"\n",
"<div class=\"alert alert-block alert-danger\">\n",
"    WARNING: Do <strong>NOT</strong> remove any comment that says \"#export\", because that will crash the autograder in Gradescope. We use this comment to export your code in these cells for grading.\n",
"</div>\n",
"\n",
"<div class=\"alert alert-block alert-danger\">\n",
"    WARNING: Do <strong>NOT</strong> import any additional libraries into this workbook.\n",
"</div>\n",
"\n",
"All instructions, code comments, etc. in this notebook **are part of the assignment instructions**. That is, if there are instructions about completing a task in this notebook, that task is not optional. \n",
"\n",
"<div class=\"alert alert-block alert-info\">\n",
"    You <strong>must</strong> implement the following functions in this notebook to receive credit.\n",
"</div>\n",
"\n",
"`user()`\n",
"\n",
"`long_trips()`\n",
"\n",
"`manhattan_trips()`\n",
"\n",
"`weighted_profit()`\n",
"\n",
"`final_output()`\n",
"\n",
"Each method will be auto-graded using different sets of parameters or data, to ensure that values are not hard-coded. You may assume we will only use your code to work with data from the NYC-TLC dataset during auto-grading.\n",
"\n",
"<div class=\"alert alert-block alert-danger\">\n",
"    WARNING: Do <strong>NOT</strong> remove or modify the following utility functions:\n",
"</div>\n",
"\n",
"`load_data()`\n",
"\n",
"`main()`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\">\n",
"    Do <strong>not</strong> change the below cell. Run it to initialize your PySpark instance. If you don't get any output, make sure your Notebook's Kernel is set to \"PySpark\" in the top right corner.\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sc"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-danger\">\n",
"    WARNING: Do <strong>NOT</strong> modify the below cell. It contains all imports, the function for loading data, and the function for running your code.\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#export\n",
"from pyspark.sql.functions import *\n",
"from pyspark.sql import *"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#### DO NOT CHANGE ANYTHING IN THIS CELL ####\n",
"\n",
"def load_data(size='small'):\n",
"    # Loads the data for this question. Do not change this function.\n",
"    # This function should only be called with the parameter 'small' or 'large'\n",
"    \n",
"    if size != 'small' and size != 'large':\n",
"        print(\"Invalid size parameter provided. Use only 'small' or 'large'.\")\n",
"        return\n",
"    \n",
"    input_bucket = \"s3://cse6242-hw3-q3\"\n",
"    \n",
"    # Load Trip Data\n",
"    trip_path = '/'+size+'/yellow_tripdata*'\n",
"    trips = spark.read.csv(input_bucket + trip_path, header=True, inferSchema=True)\n",
"    print(\"Trip Count: \",trips.count()) # Prints # of trips (# of records, as each record is one trip)\n",
"    \n",
"    # Load Lookup Data\n",
"    lookup_path = '/'+size+'/taxi*'\n",
"    lookup = spark.read.csv(input_bucket + lookup_path, header=True, inferSchema=True)\n",
"    \n",
"    return trips, lookup\n",
"\n",
"def main(size, bucket):\n",
"    # Runs your functions implemented above.\n",
"    \n",
"    print(user())\n",
"    trips, lookup = load_data(size=size)\n",
"    trips = long_trips(trips)\n",
"    mtrips = manhattan_trips(trips, lookup)\n",
"    wp = weighted_profit(trips, mtrips)\n",
"    final = final_output(wp, lookup)\n",
"    \n",
"    # Outputs the results for you to visually see\n",
"    final.show()\n",
"    \n",
"    # Writes out as a CSV to your bucket.\n",
"    final.write.csv(bucket)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Implement the below functions for this assignment:\n",
"<div class=\"alert alert-block alert-danger\">\n",
"    WARNING: Do <strong>NOT</strong> change any function inputs or outputs, and ensure that the dataframes your code returns align with the schema definitions commented in each function. Do <strong>NOT</strong> remove the #export comment from each of the code blocks either, as removing it can prevent your code from being converted to a Python file.\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3a. [1 pt] Update the `user()` function\n",
"This function should return your GT username, e.g., gburdell3"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#export\n",
"def user():\n",
"    # Returns a string consisting of your GT username.\n",
"    return 'tlou31'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3b. [2 pts] Update the `long_trips()` function\n",
"This function filters trips to keep only trips greater than or equal to 2 miles."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#export\n",
"def long_trips(trips):\n",
"    # Returns a Dataframe (trips) with Schema the same as :trips:\n",
"    return trips.filter(trips.trip_distance >= 2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3c. [6 pts] Update the `manhattan_trips()` function\n",
"\n",
"This function determines the top 20 locations with a `DOLocationID` in Manhattan by the sum of `passenger_count` (pcount).\n",
"\n",
"Example output formatting:\n",
"\n",
"```\n",
"+--------------+--------+\n",
"| DOLocationID | pcount |\n",
"+--------------+--------+\n",
"|             5|      15|\n",
"|            16|      12|\n",
"+--------------+--------+\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#export\n",
"def manhattan_trips(trips, lookup):\n",
"    # Returns a Dataframe (mtrips) with Schema: DOLocationID, pcount\n",
"\n",
"    # - This function determines the top 20 locations with a DOLocationID in Manhattan by sum of passenger count.\n",
"    # - Returns a PySpark DataFrame with the schema (DOLocationID, pcount)\n",
"    \n",
"    trip_lookup = trips.join(lookup, col(\"DOLocationID\") == col(\"LocationID\")).filter(col(\"Borough\") == \"Manhattan\")\n",
"    df = trip_lookup.groupBy(col(\"DOLocationID\")).agg({\"passenger_count\": \"sum\"}).withColumn('pcount', col(\"sum(passenger_count)\"))\n",
"    manhattan_trip_df = df.select([\"DOLocationID\", \"pcount\"]).orderBy(col(\"pcount\"), ascending=False).limit(20)\n",
"    \n",
"    return manhattan_trip_df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3d. [6 pts] Update the `weighted_profit()` function\n",
"This function should determine the average `total_amount`, the total count of trips, and the total count of trips ending in the top 20 destinations, and return the `weighted_profit` as discussed in the homework document.\n",
"\n",
"Example output formatting:\n",
"```\n",
"+--------------+-------------------+\n",
"| PULocationID |   weighted_profit |\n",
"+--------------+-------------------+\n",
"|            18| 33.784444421924436|\n",
"|            12| 21.124577637149223|\n",
"+--------------+-------------------+\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#export\n",
"def weighted_profit(trips, mtrips):\n",
"    # Returns a Dataframe (wp) with Schema: PULocationID, weighted_profit\n",
"    # Note: Use decimal datatype for weighted profit (NOTE: DON'T USE FLOAT)\n",
"    # Our grader will only be checking the first 8 characters for each value in the dataframe\n",
"    \n",
"    # i. the average total_amount,\n",
"    # ii. the total count of trips, and\n",
"    # iii. the total count of trips ending in the top 20 destinations and return the weighted_profit as discussed earlier in the homework document.\n",
"    # iv. Returns a PySpark DataFrame with the schema (PULocationID, weighted_profit) for the weighted_profit as discussed earlier in this homework document.\n",
"    \n",
"    # Calculate avg(total_amount) and total count of trips\n",
"    PU_data = trips.groupBy('PULocationID').agg({'total_amount':'avg', 'VendorID': 'count'}) \\\n",
"                    .withColumnRenamed('avg(total_amount)', 'avg_cost') \\\n",
"                    .withColumnRenamed('count(VendorID)', 'total_trips')\n",
"    \n",
"    # Join trips with mtrips on DOLocationID and summarize\n",
"    mnhtn_DO = trips.join(mtrips, trips.DOLocationID == mtrips.DOLocationID)\n",
"    DO_data = mnhtn_DO.groupBy('PULocationID').agg({'total_amount': 'count'}) \\\n",
"                      .withColumnRenamed('count(total_amount)', 'total_trips_topDO')\n",
"    \n",
"    # Left join DO_data to PU_data\n",
"    df_join = PU_data.join(DO_data, PU_data.PULocationID == DO_data.PULocationID, how='left') \\\n",
"                     .select(PU_data.PULocationID, PU_data.avg_cost, PU_data.total_trips, DO_data.total_trips_topDO)\n",
"    df_join = df_join.fillna(0)\n",
"    \n",
"    # Add weighted profit column\n",
"    df_join = df_join.withColumn('weighted_profit', (col(\"total_trips_topDO\") / col(\"total_trips\")) * col(\"avg_cost\"))\n",
"    \n",
"    return df_join.select('PULocationID', 'weighted_profit')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3e. [5 pts] Update the `final_output()` function\n",
"This function takes the results of `weighted_profit`, links them to the `Borough` and `Zone` through the lookup DataFrame, and returns the top 20 locations with the highest `weighted_profit`.\n",
"\n",
"Example output formatting:\n",
"```\n",
"+------------+---------+-------------------+\n",
"|        Zone|  Borough|    weighted_profit|\n",
"+------------+---------+-------------------+\n",
"| JFK Airport|   Queens|  16.95897820117925|\n",
"|     Jamaica|   Queens| 14.879835188762488|\n",
"+------------+---------+-------------------+\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#export\n",
"def final_output(wp, lookup):\n",
"    # Returns a Dataframe (final) with Schema: Zone, Borough, weighted_profit\n",
"    # Note: Use decimal datatype for weighted profit (NOTE: DON'T USE FLOAT)\n",
"    # Our grader will only be checking the first 8 characters for each value in the dataframe\n",
"\n",
"    # This function\n",
"    # - takes the results of weighted_profit,\n",
"    # - links it to the borough and zone through the lookup data frame, and\n",
"    # - returns the top 20 locations with the highest weighted_profit.\n",
"    # - Returns a PySpark DataFrame with the schema (Zone, Borough, weighted_profit)\n",
"    df = wp.join(lookup, wp.PULocationID == lookup.LocationID).select(\"Zone\", \"Borough\", \"weighted_profit\")\n",
"    df = df.orderBy(\"weighted_profit\", ascending=False).limit(20)\n",
"    \n",
"    return df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing\n",
"\n",
"<div class=\"alert alert-block alert-info\">\n",
"    You may use the cells below for any additional testing you need to do; however, any code implemented below will not be run or used when grading.\n",
"</div>"
]
},
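{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A hedged usage sketch for local testing (not graded): run the full pipeline defined in main()\n",
"# on the small dataset. The output path below is only a placeholder -- substitute your own S3\n",
"# bucket before uncommenting, since final.write.csv() needs a writable location.\n",
"# main('small', 's3://<your-gt-username>-hw3-q3/output/')"
]
},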
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.2"
}
},
"nbformat": 4,
"nbformat_minor": 4
}