Description

1. Lecture 11 – Interpolation, Curve Fitting, and Extrapolation

Step 1 – In your MATLAB Workspace, load Tesla stock price data using this command

load TSLA_stock_history_Mar_2022_2023.mat

NOTE 1: Variable “Stock_Date” is the number of days since 7 March 2022

NOTE 2: Date numbers don’t increment uniformly – market isn’t open every day

NOTE 3: Variable “Stock_Price” is the rounded closing stock price

Step 2 – In the MATLAB “Curve Fitter” app, use the “Select Data” button and load in “Stock_Date” as the “X value” and “Stock_Price” as the “Y value”

Step 3 – Set “FIT TYPE” (see figure immediately below) to “Interpolant”

Step 4 – “Export” by selecting “Generate Code” (as shown in the figure above on the right side) to create a function called “createFit” that you paste into your script and call using this line:

[fitresult, gof] = createFit(Stock_Date, Stock_Price)

a. Interpolation: Compare “fitresult” and “Stock_Price” values for days 1, 100, and 361 and show they are identical – the goal is to show the interpolation result passes through the values being interpolated. Then show that you can compute a price for day numbers 275.5 and 351.1 – the goal is to show you can now compute values at any date. Hint – use the command “fitresult([1 100 361])” to get the fitresult values, where the numerical values are the days, not indices

[deliverables: table of “fitresult” values, “Stock_Price” values, and 2 sentences explaining the different prices]
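A minimal evaluation sketch (assuming the exported createFit function and the variable names above):

[fitresult, gof] = createFit(Stock_Date, Stock_Price);
days = [1 100 361 275.5 351.1];   % day numbers, not indices
prices = fitresult(days)          % passes through the known days exactly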

b. Curve Fit – Use the “polyfit” command to generate polynomial fits to the data for N = 3, 5, & 7 (type “help polyfit” in the Command Window for information on what N means) and evaluate them at the same 5 dates provided in part a. Here is an example line of code where N = 7:

p7 = polyfit(Stock_Date, Stock_Price, 7)

price_p7 = polyval(p7,[1 100 361 275.5 351.1])

[deliverables: table of “Stock_Price” values versus the 3 polynomial fits (as shown below) and 2 sentences explaining the different prices].
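A sketch that loops over the three fits and evaluates all 5 dates (with raw day numbers, MATLAB may warn that a high-degree fit is badly conditioned; the three-output form of polyfit centers and scales the data if you want to suppress that):

days = [1 100 361 275.5 351.1];
for N = [3 5 7]
    p = polyfit(Stock_Date, Stock_Price, N);   % may warn: badly conditioned
    fprintf('N = %d:', N); fprintf(' %8.2f', polyval(p, days)); fprintf('\n');
end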

c. Extrapolate – compare part a. interpolation vs part b. polyfit solutions for these dates: 380, 400, & 410.

[deliverables: plot all Stock_Price, interpolation, and 3 polynomial curve fits (days 1 through 361) plus the 12 total extrapolation prices as markers, ensure plot has a legend]
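One way to assemble the deliverable plot (a sketch; assumes fitresult from part a and p3, p5, p7 named following the part b pattern):

d = 1:361; d_ext = [380 400 410];
plot(Stock_Date, Stock_Price, 'k.'); hold on
plot(d, fitresult(d), d, polyval(p3,d), d, polyval(p5,d), d, polyval(p7,d))
plot(d_ext, fitresult(d_ext), 'o', d_ext, polyval(p3,d_ext), 's', ...
     d_ext, polyval(p5,d_ext), '^', d_ext, polyval(p7,d_ext), 'v')
legend('Stock\_Price', 'interpolant', 'N=3', 'N=5', 'N=7', ...
       'interp extrap', 'N=3 extrap', 'N=5 extrap', 'N=7 extrap')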

NOTE: your actual extrapolation values are not graded; you are only graded on your ability to extrapolate. Nor should these values be taken seriously to buy or sell stock!

2. Lecture 12 – Singular Value Decomposition (SVD)

GOAL: Your client wants you to create an algorithm to compress image data to reduce mission data transmission time for a new satellite design. They want you to keep only 90% of the cumulative sum of singular values – meaning keep the singular values, starting with the largest, that add up to 90% of the total (see plot below).

Use this MATLAB stock image to test your approach and these commands to read the image:

X = imread('ngc6543a.jpg');

X = double(rgb2gray(X));

If you want more info on the image, see: Cat’s Eye Nebula – Wikipedia

a. Calculate the SVD of the image using the ‘econ’ option. Plot the singular values (similar to how we did in the “Lecture_12_SVD_face_demo.mlx” code).

[deliverables: plot of singular values]
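A minimal sketch (a semilogy plot is one common way to show the decay; the class demo may differ):

[U, S, V] = svd(X, 'econ');
sig = diag(S);                 % singular values, largest first
semilogy(sig, 'o')
xlabel('Index k'); ylabel('Singular value \sigma_k')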

b. Calculate the cumulative sum of the singular values (your plot should look similar to the example plot on the right, which came from the clown image we discussed in class). Determine the index (# of singular values) at which the percentage of the cumulative sum of singular values reaches 90%.

[deliverables: plot of percentage of cumulative sum of singular values & report the index]
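A sketch of the calculation (assuming sig from part a):

pct = 100*cumsum(sig)/sum(sig);   % running percentage of the total
plot(pct); xlabel('Index k'); ylabel('% of cumulative sum')
r = find(pct >= 90, 1)            % first index reaching 90%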

c. Using the “whos” command, calculate the compression ratio of original image to compressed image sizes in bytes (original divided by compressed).

[deliverables: show calculations & report your final compression ratio (bytes:bytes); it should be greater than 3]

3. Lecture 13 – Fast Fourier Transform (FFT)

Load “chime2.mat” (it’s on CANVAS)

“N” is the number of samples

“audio_recording_data” is the not-so-good (noisy) audio recording of the chime

“dt” is the time between samples

“t” is the time array

For example, you can plot or hear the recording:

plot(t,audio_recording_data)

sound(audio_recording_data, 1/dt)   % play back at the recorded sample rate

a. Compute the FFT of “audio_recording_data” using the “fft” command, compute the Power Spectral Density (PSD), and plot the PSD (like how it was done in class)

[deliverables: plot with labels and legend; x axis must be in units of Hz, like how it was done in class]
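A sketch following the class pattern (variable names from chime2.mat; freq is built as a column to match a column-shaped recording):

fhat = fft(audio_recording_data);
PSD = fhat.*conj(fhat)/N;        % power spectral density
freq = (0:N-1)'/(dt*N);          % frequency axis in Hz
half = 1:floor(N/2);             % first half only (below Nyquist)
plot(freq(half), PSD(half))
xlabel('Frequency (Hz)'); ylabel('PSD'); legend('chime PSD')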

b. Report the first 6 frequencies in Hz that have a PSD value above 1E6

[deliverables: table of frequencies in Hz]
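A sketch (assuming PSD, freq, and half from part a):

idx = find(PSD(half) > 1e6, 6);   % first six entries above the cutoff
freq(idx)                         % report these in Hz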

c. Denoise the data by the following (see class notes and code):

(1) Magnitude filter – only keep PSD values above 1E6

(2) Frequency filter – only keep PSD values below 2000 Hz

[deliverables: plot PSD of denoised data – plot should look similar to the one below]
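A sketch of the two filters (assumes PSD, freq, and fhat from part a, all with the same shape; the filtered coefficients feed the ifft in part d):

keep = (PSD > 1e6) & (freq < 2000);   % magnitude filter & frequency filter
PSD_clean = PSD.*keep;
fhat_clean = fhat.*keep;              % filtered FFT coefficients
plot(freq(half), PSD_clean(half)); xlabel('Frequency (Hz)'); ylabel('PSD')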

d. Compute the inverse FFT using the “ifft” command and plot the time domain signal from 0 to just under 2 seconds

[deliverables: plot time domain denoised data]
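A sketch (assuming fhat_clean from part c and t from the .mat file):

denoised = real(ifft(fhat_clean));   % back to the time domain
mask = t < 2;                        % 0 to just under 2 seconds
plot(t(mask), denoised(mask)); xlabel('Time (s)'); ylabel('Amplitude')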

Eigenanalysis and Singular Value Decomposition (SVD)

Overview

• If we think of matrices as a set of column vectors, can we figure out the most important vectors (directions & magnitudes) and disregard the rest?

• We will study 2 different approaches to computing the MOST important characteristics (vectors) from our data:

(1) Eigenanalysis (only applies to square matrices)

(2) Singular Value Decomposition (SVD) (applies to any matrix)

Remember to take the daily quiz

Intro

We learned to think about matrices as collections of vectors. Now – “which vectors provide us the most important information about a matrix?”

• For example, what is the most important vector in this rank-1 matrix?

$$A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}$$

• Eigenanalysis (use “eig” command in MATLAB): $[V, D] = \mathrm{eig}(A)$

$$V = \begin{bmatrix} 0.41 & 0.71 & 0.5774 \\ 0.41 & -0.71 & 0.5774 \\ -0.81 & 0 & 0.5774 \end{bmatrix} \quad \& \quad D = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$

The only nonzero eigenvalue is $\lambda_3 = 3$; its eigenvector points along $\vec{v} = [1 \;\; 1 \;\; 1]^T$, which MATLAB returns unit-normalized as $\hat{v}_3 = [0.5774 \;\; 0.5774 \;\; 0.5774]^T$.

• Singular Value Decomposition (SVD) (use “svd” command in MATLAB): $[U, S, V] = \mathrm{svd}(A)$

$$U = V = \begin{bmatrix} -0.5774 & 0.82 & 0 \\ -0.5774 & -0.41 & -0.71 \\ -0.5774 & -0.41 & 0.71 \end{bmatrix} \quad \& \quad S = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

Intro

• Next example, what is the most important vector in this rank-2 matrix?

$$A = \begin{bmatrix} 1 & 0.1 & 1.1 \\ 1 & -0.1 & 0.9 \\ 0 & 0 & 0 \end{bmatrix}$$

• Eigenanalysis (use “eig” command in MATLAB): $[V, D] = \mathrm{eig}(A)$

$$V = \begin{bmatrix} 0.7641 & -0.0841 & -0.5774 \\ 0.6451 & 0.9965 & -0.5774 \\ 0 & 0 & 0.5774 \end{bmatrix} \quad \& \quad D = \begin{bmatrix} 1.0844 & 0 & 0 \\ 0 & -0.1084 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

Think “direction” when thinking about eigenvectors; think “magnitude” when thinking about eigenvalues. The dominant eigenpair here is $\lambda_1 = 1.0844$ with $\vec{v}_1 = [0.7641 \;\; 0.6451 \;\; 0]^T$.

• Singular Value Decomposition (SVD) (use “svd” command in MATLAB): $[U, S, V] = \mathrm{svd}(A)$

$$U = \begin{bmatrix} -0.7418 & -0.6706 & 0 \\ -0.6706 & 0.7418 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad S = \begin{bmatrix} 2.0025 & 0 & 0 \\ 0 & 0.173 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad V = \begin{bmatrix} -0.7053 & 0.4113 & -0.5774 \\ -0.0036 & -0.8165 & -0.5774 \\ -0.7089 & -0.4052 & 0.5774 \end{bmatrix}$$

Note – sign flips are expected and not an issue.

Eigenanalysis

• The word “eigen” is adopted from the German word that means “characteristic” or “proper” [1]

• Originally eigenanalysis was used to study principal axes of rotation matrices and inertia matrices

• Now, eigenanalysis is also used for stability, vibration, atomic orbitals, facial recognition, matrix diagonalization, and other analyses/functions [1]

• Applies only to square matrices

• Equations:

$$A\vec{v} = \lambda\vec{v}$$

where $A$ is an $n \times n$ matrix, $\vec{v}$ is a vector, and $\lambda$ is a scalar. Also stated equivalently as

$$(A - \lambda I)\vec{v} = \vec{0}$$

where $I$ is the $n \times n$ identity matrix.

Eigenanalysis

• The “characteristic” polynomial has degree $n$ and can be factored in terms of the eigenvalues:

$$\det(A - \lambda I) = (\lambda_1 - \lambda)(\lambda_2 - \lambda)\cdots(\lambda_n - \lambda)$$

• The “eigenvalues” $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the “roots” of the polynomial

• The eigenvector $\vec{v}_j$ corresponding to each eigenvalue $\lambda_j$ can be computed from the components of

$$(A - \lambda_j I)\vec{v}_j = \vec{0}$$

• Using the “eig” command, MATLAB provides unit-normalized eigenvectors in matrix $V$ with corresponding eigenvalues in a diagonal matrix $D$: $[V, D] = \mathrm{eig}(A)$, where the first column of $V$ is $\hat{v}_1$, which corresponds with the first eigenvalue $\lambda_1$ on the diagonal of $D$ – see the example below from before:

$$V = \begin{bmatrix} 0.7641 & -0.0841 & -0.5774 \\ 0.6451 & 0.9965 & -0.5774 \\ 0 & 0 & 0.5774 \end{bmatrix} \quad \& \quad D = \begin{bmatrix} 1.0844 & 0 & 0 \\ 0 & -0.1084 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

Eigenanalysis

• The matrix $A$ can now be represented as $A = VDV^{-1}$

• One powerful application for eigenanalysis is data reduction or reduced-order modeling

• We can use the most important eigenpairs (values and vectors) to represent the matrix

• In this case we take a rank-3 matrix and represent it with a rank-1 matrix:

$$A \approx \hat{v}_1 \, \lambda_1 \, \hat{v}_1^{-1}$$

where $\hat{v}_1$ is the first column of $V$ and $\hat{v}_1^{-1}$ denotes the first row of $V^{-1}$.

(Figure: the original and the rank-1 reconstruction look very similar.)

Exercises

See file “Lecture12_examples.mlx” on CANVAS

(1) Compute the eigenvalues and corresponding eigenvectors for matrices A, B, and C using the eig command. For example, [V, D] = eig(A)

(2) Create a rank-1 approximation for matrices A and B

(3) Create a rank-2 approximation for matrix C

(4) What is the norm of the difference between the original and low-rank approximations? (A sketch for (2) and (4) follows below.)
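A minimal sketch of (2) and (4) using eigenpairs (the rank-2 matrix from the Intro is assumed as a stand-in for A):

A = [1 0.1 1.1; 1 -0.1 0.9; 0 0 0];   % stand-in example matrix
[V, D] = eig(A);
[~, i] = max(abs(diag(D)));           % index of the dominant eigenvalue
W = inv(V);                           % rows of W pair with columns of V
A1 = V(:,i) * D(i,i) * W(i,:);        % rank-1 approximation
err = norm(A - A1)                    % exercise (4)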

Singular Value Decomposition (SVD)

• You can think of SVD as a generalized version of the eigendecomposition because it doesn’t require a square matrix

• SVD is useful for:

– Low-rank approximations to matrices

– Pseudo-inverses of non-square matrices – least-squares solutions

– Principal components analysis (PCA)

• Reduced-order data sets or models allow us to handle large complex systems & data sets such as audio, image, or video

• We want to extract the dominant patterns or behaviors

• “Provides a systematic way to determine the dominant patterns underlying a high-dimensional system” → provides a “low rank” approximation to high-dimensional data [1]

Singular Value Decomposition (SVD)

• Say we want to analyze a large data matrix $X$ that has millions of rows and thousands of columns

• For example, the columns of $X$ could be measurements collected from multiple experiments or complete audio files or images

• Using SVD, this large rectangular matrix can be expressed as $X = U \Sigma V^{*}$

– $X$ is any $n \times m$ matrix (typically $n \gg m$, meaning a tall skinny matrix)

– $U \in \mathbb{C}^{n \times n}$ and $V \in \mathbb{C}^{m \times m}$ are unitary matrices. “*” is the complex-conjugate transpose. Columns of $U$ are called the “left singular” vectors and columns of $V$ are the “right singular” vectors

– $\Sigma$ is a diagonal matrix with non-negative singular values on the diagonal in rank order

• SVD example from earlier (use “svd” command in MATLAB): $[U, S, V] = \mathrm{svd}(A)$, where $A$ is the rank-1 all-ones matrix:

$$U = V = \begin{bmatrix} -0.5774 & 0.82 & 0 \\ -0.5774 & -0.41 & -0.71 \\ -0.5774 & -0.41 & 0.71 \end{bmatrix} \quad \& \quad S = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

Singular Value Decomposition (SVD)

The matrix X can be represented as the full or reduced SVD without loss (fig. from [2]).

Matrix X can be compressed further by truncating and only keeping the largest singular values (fig. from [2]).

Example — Singular Value Decomposition (SVD)

• Think of a digital photo as a collection of pixels and their respective values

• From an SVD of this photo, we can rebuild it starting from the highest and going to the lowest singular values

• The original image is 200 x 300 pixels & has a rank of 200

• Low-rank approximation is one way to compress the image data

• The magnitude of each singular value is shown in the bottom 2 plots – we can determine the best place to truncate to get a good low-rank approximation

(Example figures: reconstructions keeping the first 50 singular values.)

Exercises

(1) Compute the singular values and unitary matrices for matrices A, B, and C using the svd command. For example, [U, S, V] = svd(A)

(2) Create a rank-1 approximation for matrices A and B

(3) Create a rank-2 approximation for matrix C

(4) What is the norm of the difference between the original and low-rank approximations? (A sketch for (2) and (4) follows below.)
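A minimal sketch of (2) and (4) with the SVD (any matrix A works):

[U, S, V] = svd(A);
A1 = U(:,1) * S(1,1) * V(:,1)';   % rank-1 approximation
err = norm(A - A1)                % the 2-norm error equals S(2,2)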


Conclusions

• As engineers, we analyze large matrices that can have thousands of columns and many hundreds of thousands of rows

• In this lecture, we studied 2 different approaches to computing the MOST important characteristics from our data:

– Eigenanalysis

• PROS: produces eigenvectors and eigenvalues so we can focus on the most important vectors

• CONS: applies to square matrices only

– Singular Value Decomposition (SVD)

• PROS: applies to any matrix, we can focus on the most important vectors, and we can easily create low-rank approximations, typically cutting out the noise

Interpolation, numerical derivatives, and other topics

Don’t forget to open CANVAS during class and take the daily quiz

Overview

• Typical challenges or issues with measured data: noise, discretization, incomplete, incorrect, inconsistent

• Interpolation – we will discuss 2 types:

(1) Single polynomial through entire dataset

(2) Piecewise – linear and cubic splines

• Curve Fitting (a.k.a. linear regression or least-squares)

• Extrapolation (make predictions outside of measured range)

• How do we typically compute numerical derivatives?

Noisy Data

• Almost all measurements have noise

• Most of the time we need to filter the data, especially before trying to compute its derivative

• You can see the amplitude of the derivative of the noisy data (blue) quickly exceeds the derivative of the cosine function (both were computed numerically)

(Figures: cosine function with random noise and without; numerical derivatives of the cosine function with random noise and without (red line).)

Discrete Data

• Continuous data can be divided into smaller and smaller units (i.e. infinitely divisible). Examples: instantaneous time, temperature, velocity. We typically plot continuous functions as lines.

• Discrete data is counted and cannot be divided into smaller units (examples – # of students in a classroom, or continuous data that we sample, like 1x per second)

– Typically plot discrete data as markers

– People tend to connect markers, but this may not represent interpolated data

(Figure: stairstep graph – think “sample and hold”.)

Incomplete, Incorrect, & Inconsistent Data

• Incomplete data typically has gaps, such as a period of time where no data was collected or data missing between samples

• Incorrect data commonly shows up when the data is being transmitted

• Inconsistent data has variable sampling periods, and you may need to resample the data to ensure a fixed sampling period

See: Resample uniform or nonuniform data to new fixed rate – MATLAB resample (mathworks.com)

Interpolating Data

Interpolation produces estimates between known observations by developing functions that pass through a discrete set of known data points and give us values in between the known data. (Figure: we know the red dots, but we want a function that gives us values in between the known values.)

Typical choices for spline interpolation:

– Linear – straight lines between data points

– PROS: easy, continuous

– CONS: discontinuous derivatives, unrealistic

– Quadratic – 2nd order polynomials between data points

– PROS: continuous, smooth

– CONS: path doesn’t account for final velocity

– Cubic – 3rd order polynomials between data points

– PROS: continuous with continuous derivatives, most realistic

– CONS: more computation required

MATLAB Example – Linear Interpolation

Using the interp1 command, we can create 10 linear functions that pass through the 11 known values (see circles) but provide values at a 100x smaller increment size – see green line.

MATLAB Example – Cubic Interpolation

Using the interp1 command, we can create 10 cubic functions that pass through the 11 known values (see circles) but provide values at a 100x smaller increment size – see green line.
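A sketch of both demos (the sample data is assumed; 'spline' is one of interp1's cubic options):

x = 0:10;                              % 11 known values
y = cos(x);
xf = 0:0.01:10;                        % 100x smaller increment size
y_lin = interp1(x, y, xf);             % piecewise linear (default)
y_cub = interp1(x, y, xf, 'spline');   % piecewise cubic
plot(x, y, 'ko', xf, y_lin, 'g-', xf, y_cub, 'b--')
legend('known values', 'linear', 'cubic')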

Curve Fitting

– Two lectures ago, we looked at the polyfit command – here we will use MATLAB’s fit command

– “Coefficient of determination, in statistics, R2 (or r2), a measure that assesses the ability of a model to predict or explain an outcome in the linear regression setting.”

– Normalized between 0 and 1, where 1 means the function(s) pass through all the measured data

plot(x,y,'k.'); hold on;   % plot the data first so the legend matches
[F_poly1,gof] = fit(x,y,'poly1')
[F_poly2,gof] = fit(x,y,'poly2')
plot(F_poly1,'r-');
plot(F_poly2,'b-');
legend('data','1st order','2nd order');

Curve Fitting App

(1) Get the Curve Fitter App

(2) Code in the Command Window or m-file:

x = linspace(0.1,10,100)';
noise = 2*randn(size(x));
y = exp((x - 0.5)/2)/10 + noise;

(3) “Select Data” x and y

(4) Select Fit type – try various types and study R-square and RMSE

(5) Try to set exclusion rules

(6) Export Fit

Extrapolation

– Estimate of the value of some function outside the range of known values

– “Extrapolation is a type of estimation, beyond the original observation range” [1]

– Extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results [1]

See “Extrapolation” in AEE_3150_Lecture_11_examples.mlx

Methods to Differentiate Noisy Data

• If measured data is believed to be low noise, one can compute interpolation functions and then compute derivatives – pick interpolation functions that have continuous derivatives

• If data is noisy, then a curve fit may be the best initial step, followed by differentiating the curve fit

• We will study a different method, in the upcoming Fast Fourier Transform (FFT) lecture, for data that can be represented with sines and cosines

A sketch of the first two routes is below.
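A brief illustrative sketch (x, y, and xf assumed as in the earlier interp1 example):

% low-noise route: interpolate with continuous derivatives, then differentiate
yi = interp1(x, y, xf, 'spline');
dyi = gradient(yi, xf);            % numerical derivative on the fine grid
% noisy route: smooth curve fit first, then differentiate the fit exactly
p = polyfit(x, y, 3);
dy_fit = polyval(polyder(p), xf);  % derivative of the fitted polynomial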

Exercises

– There are 4 data sets in the file called “Lecture11_data.mat” with names following this pattern: “x1”, “y1_validation”, & “y1_noisy”

– Each data set is a matched set – meaning x2 goes with y2

– Use the Curve Fitting App to curve fit, interpolate, and extrapolate

– Share your R-square and Root Mean Square Error (RMSE) for each data set

For each plot, the red dots are the “noisy” data, and the blue lines are the “validation” data.

Exercise — for each data set, perform these actions

Data Set 1: curve fit the noisy data to estimate the amplitude $A$ and frequency $f$:

$$y_1 = A \cos(2\pi f \, x_1)$$

Data Set 2: What is the best curve fit function you can find – how does it compare to the validation data and function:

$$y_2 = \frac{1}{1 + x_2^2}$$

Data Set 3: Use your best curve fit to extrapolate values for x values between 10 and 15. Compare your values to the function:

$$y_3 = 0.1 \cdot 2^{\,x_3 - 0.5}$$

Data Set 4: Compare your slope and intercept estimate to the original function:

$$y_4 = 0.867\, x_4 - 2.022$$

Conclusions

• There are many typical challenges or issues with measured data, such as noise, discretization effects, incomplete, incorrect, and/or inconsistent data

• We studied 3 approaches to model data sets:

(1) Interpolation – passing functions through known data points using piecewise linear and cubic splines

(2) Curve Fitting (a.k.a. linear regression or least-squares) – a great approach for approximating noisy data with 1 smooth function

(3) Extrapolation (making predictions outside of the measured range)

• We used the “Curve Fitting App” – this app is very useful because you can quickly interpolate data, create curve fits, and extrapolate data

• How do we typically compute numerical derivatives?

Fourier Series and the Fast Fourier Transform (FFT)

Overview

• Taylor Series approximations use polynomials [1]:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots$$

• A Taylor Series does not work well on sinusoids because you need infinite terms; approximating sinusoids with sine and cosine functions works really well → Fourier Series [2]:

$$f(x) \sim A_0 + \sum_{k=1}^{\infty} \left[ A_k \cos\!\left(\frac{2\pi k x}{L}\right) + B_k \sin\!\left(\frac{2\pi k x}{L}\right) \right]$$

(Figures: Taylor Series approximation of a sine wave, from [3]; Fourier Series approximation of a square wave, from [4].)

• The Fast Fourier Transform (FFT) computes coefficients for large data sets very quickly, and the inverse FFT (iFFT) allows you to transform back

• We only look at discrete data and 2 different forms of Fourier coefficients: real- and complex-valued

FFT History

• The Fourier Series is named in honor of Jean-Baptiste Joseph Fourier (1768–1830) [2]

• Gauss originally invented the FFT in 1805, but it wasn’t published or recognized until later [2]

• Cooley and Tukey published the modern implementation in 1965 and get the credit for it [2]

(Figures: example seismometer – the mass moves up and down with the motion of Earth’s crust, from [2]; example seismometer data, from [1].)

Watch intro: 0-33 sec

Watch description: 7:26 through 10:20

Fourier Series

• Imagine we have collected discrete samples $f_0, f_1, \ldots, f_{N-1}$ at locations $x_0, x_1, \ldots, x_{N-1}$ (could be spatial, temporal, or both)

• Approximate the signal with a truncated series of sines and cosines:

$$f(x) \cong A_0 + \sum_{k=1}^{K} \left[ A_k \cos\!\left(\frac{2\pi k x}{L}\right) + B_k \sin\!\left(\frac{2\pi k x}{L}\right) \right]$$

where $k$ is the term number and the coefficients are computed from the samples:

$$A_0 = \frac{1}{L} \sum_{j=1}^{N} f(x_j)\,\Delta x \quad \text{("the average")}$$

$$A_k = \frac{2}{L} \sum_{j=1}^{N} f(x_j) \cos\!\left(\frac{2\pi k x_j}{L}\right) \Delta x \quad \text{("magnitude of each cosine term")}$$

$$B_k = \frac{2}{L} \sum_{j=1}^{N} f(x_j) \sin\!\left(\frac{2\pi k x_j}{L}\right) \Delta x \quad \text{("magnitude of each sine term")}$$

• Example file: “AEE_3150_Lecture_13_Fourier_Series_Example.mlx”

Fourier Series – MATLAB Example

(we are only discussing key lines of code, not plotting and tables)

Step 1) Create delta x – what size do we want to cut L up into?

dx = 0.005;

Step 2) What is the length of one period (time or space)?

L = 1; % could be time or a spatial dimension that defines the units

Step 3) How do we create an “x” array over the length L? What is N?

x = dx:dx:L;

Step 4) Create the function – what are the magnitudes and frequencies?

f = 1*sin(2*pi*x) + 0.5*sin(4*2*pi*x) + 0.25*sin(8*2*pi*x);

Step 5) Create a variable for the Fourier Series (FS)

f_FS = zeros(size(x));

Step 6) Calculate the A0 “average” term for the FS, $A_0 = \frac{1}{L}\sum_{j=1}^{N} f(x_j)\,\Delta x$:

A0 = (1/L)*sum(f.*ones(size(x)))*dx;

Step 7) Create a new figure and plot the A0 term (the average)

plot(x,A0*ones(size(x)),'k-','Color',[0.94 0.86 0.86]); hold on;

Fourier Series (FS) – MATLAB Example

(we are only discussing key lines of code, not plotting and tables)

Step 8) How many FS coefficient terms do we want?

T = 10;

Step 9) Loop over FS coefficient terms – show how the approximation improves

for term_ctr = 1 : T

Step 10) Add in the first term – “the average”

f_FS = A0;

Step 11) Loop over K to calculate the coefficients

for K = 1 : term_ctr

Step 12) Calculate the Ak coefficient, $A_k = \frac{2}{L}\sum_{j=1}^{N} f(x_j)\cos\!\left(\frac{2\pi k x_j}{L}\right)\Delta x$:

Ak(K) = (2/L)*sum(f.*cos(2*pi*K*x/L))*dx;

Step 13) Calculate the Bk coefficient, $B_k = \frac{2}{L}\sum_{j=1}^{N} f(x_j)\sin\!\left(\frac{2\pi k x_j}{L}\right)\Delta x$:

Bk(K) = (2/L)*sum(f.*sin(2*pi*K*x/L))*dx;

Step 14) Calculate the FS time/spatial-domain approximation, $f(x) \cong A_0 + \sum_k [A_k \cos(2\pi k x/L) + B_k \sin(2\pi k x/L)]$:

f_FS = f_FS + Ak(K)*cos(2*K*pi*x/L) + Bk(K)*sin(2*K*pi*x/L);
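The fragments above assemble into a short runnable script; here is a consolidated sketch (the final comparison plot is an assumption):

dx = 0.005; L = 1; x = dx:dx:L;      % N = L/dx = 200 samples
f = 1*sin(2*pi*x) + 0.5*sin(4*2*pi*x) + 0.25*sin(8*2*pi*x);
A0 = (1/L)*sum(f)*dx;                % the "average" term
f_FS = A0*ones(size(x));
T = 10;
for K = 1:T
    Ak = (2/L)*sum(f.*cos(2*pi*K*x/L))*dx;   % cosine magnitudes
    Bk = (2/L)*sum(f.*sin(2*pi*K*x/L))*dx;   % sine magnitudes
    f_FS = f_FS + Ak*cos(2*K*pi*x/L) + Bk*sin(2*K*pi*x/L);
end
plot(x, f, 'k', x, f_FS, 'r--'); legend('signal', '10-term FS')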

Fourier Series – MATLAB Example

(Figures: the original signal is shown in black; the 1-term approximation is the average, followed by the 2-, 3-, and 4-term FS approximations. Tabular form: A0, B1, B4, and B8 terms.)

Exercise

Fast Fourier Transform (FFT)

• Imagine we have collected discrete samples $f_0, f_1, \ldots, f_{N-1}$ – data points in the temporal and/or spatial domain. The DFT maps them to $\hat{f}_0, \ldots, \hat{f}_{N-1}$, where the “hat” indicates Fourier coefficients in the complex frequency domain

• Write the series using Euler’s formula: $e^{i\theta} = \cos\theta + i\sin\theta$

• Discrete Fourier Transform (DFT):

$$\hat{f}_k = \sum_{n=0}^{N-1} f_n \, e^{-i 2\pi k n / N}$$

There are 2 values changing here: $k$ & $n$. With $\omega_N = e^{-i 2\pi / N}$, both $k$ & $n$ vary from 0 to $N-1$ in the summation above.

FFT

$$\hat{f}_k = \sum_{n=0}^{N-1} f_n \, \omega_N^{kn}, \qquad \omega_N = e^{-i 2\pi / N}$$

In matrix form, the Fourier coefficients (all complex-valued, with magnitude and phase) come from multiplying the data points in the temporal and/or spatial domain by the DFT matrix:

$$\begin{bmatrix} \hat{f}_0 \\ \hat{f}_1 \\ \hat{f}_2 \\ \vdots \\ \hat{f}_{N-1} \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega & \omega^2 & \cdots & \omega^{N-1} \\ 1 & \omega^2 & \omega^4 & \cdots & \omega^{2(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{N-1} & \omega^{2(N-1)} & \cdots & \omega^{(N-1)^2} \end{bmatrix} \begin{bmatrix} f_0 \\ f_1 \\ f_2 \\ \vdots \\ f_{N-1} \end{bmatrix}$$

The first row is all 1s because K = 0, and the first column is all 1s because n = 0. The DFT matrix is a very large N by N matrix. You never compute Fourier coefficients with this matrix in this form – a reorganized version of this matrix multiply is the FFT.
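A tiny illustrative check (not from the slides) that fft computes the same product, just much faster:

N = 8; n = 0:N-1;
W = exp(-1i*2*pi/N) .^ (n' * n);   % the N-by-N DFT matrix built explicitly
f = randn(N, 1);
max(abs(W*f - fft(f)))             % agrees to round-off (~1e-15)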

Step 1) Create dt

Step 2) Create a time vector

Step 3) Create the clean discrete signal

Step 4) Add noise to the clean signal

Step 5) Plot

QUESTIONS: Is there a signal hidden in the noise? Can we extract the signal?
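The slide’s code was an image; a hedged reconstruction of Steps 1-5 in the style of the class demo (the signal frequencies and noise level are assumptions):

dt = 0.001;                                   % Step 1
t = 0:dt:1;                                   % Step 2: time vector
f_clean = sin(2*pi*50*t) + sin(2*pi*120*t);   % Step 3 (assumed frequencies)
f_noisy = f_clean + 2.5*randn(size(t));       % Step 4: add noise
plot(t, f_noisy); hold on; plot(t, f_clean)   % Step 5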

Step 6) N is the number of samples

Step 7) Compute the FFT of the clean signal

Step 8) Compute the Power Spectral Density (PSD) of the clean signal

Step 10) Create the x-axis

Step 11) Compute the FFT of the noisy signal

Step 12) Compute the PSD of the noisy signal

Step 13) Create the x-axis

Step 14) Create an index to only plot ½ of the FFT

QUESTIONS:

(1) What is the difference between the clean and noisy PSDs?

(2) What are the 2 spikes in the clean PSD?

(3) Can you filter the noise from the noisy PSD?
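Continuing the hedged reconstruction for Steps 6-14:

N = length(t);                        % Step 6: number of samples
fhat_c = fft(f_clean);                % Step 7
PSD_c = fhat_c.*conj(fhat_c)/N;       % Step 8: PSD of the clean signal
freq = (0:N-1)/(dt*N);                % Steps 10 & 13: x-axis in Hz
fhat_n = fft(f_noisy);                % Step 11
PSD_n = fhat_n.*conj(fhat_n)/N;       % Step 12
half = 1:floor(N/2);                  % Step 14: plot only half of the FFT
plot(freq(half), PSD_n(half), freq(half), PSD_c(half))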

Magnitude Filter

Step 15) Find indices where the PSD is above the cutoff

Step 16) Create a new PSD for the signal above the cutoff

Step 17) Apply the cutoff to the FFT coefficients

Step 18) Inverse FFT to create a denoised signal

Step 19) Plot the denoised signal

Step 20) Plot the PSD of the denoised signal


Exercise

See file: 'Lec13_Noisy_Signal_Student.mat'

QUESTIONS:

(1) Is there a hidden signal?

(2) What are its frequencies and magnitudes?

Conclusions

• We learned that approximating signals with sine and cosine functions works really well with the Fourier Series

• We learned how the Fast Fourier Transform (FFT) computes coefficients for large data sets very quickly, and the inverse FFT (iFFT) allows you to transform back

• We looked at two different forms of the Fourier Series: real- and complex-valued (both work well)
