1. Problem description
The following is the original problem description from Math StackExchange.
"Since future market prices, and the effect of large sales on these prices,
are very hard to predict, brokerage firms use models of the market to help them
make such decisions. In this problem, we will consider the following simple
model. Suppose we need to sell “x” shares of the stock in a company, and suppose
that we have an accurate model of the market: it predicts that the stock price
will take the values P1, P2, …, Pn over the next “n” days. Moreover, there is a
function f(.) that predicts the effect of large sales: if we sell “y” shares on
a single day, it will permanently decrease the price by f(y) from that day
onward. So, if sell “y1” shares on day 1, we obtain a price per share of
"p1-f(y1)”, for a total income of “y1•(p1-f(y1))”. Having sold “y1” shares on
day 1, we can then sell “y2” shares on day 2 for a price per share of
“p2-f(y1)-f(y2)”; this yields an additional income of “y2•(p2-f(y1)-f(y2))”.
This process continues over all “n” days. (NOTE, as in our calculation for day
2, that the decreases from earlier days are absorbed into the prices for all
later days.)
Design an efficient algorithm that takes the prices “p1,…,pn” and the
function f(•) (written as a list of values f(1), f(2), …, f(x)) and determines
the best way to sell “x” shares by day “n”. In other words, find natural numbers
“y1, y2,…, yn” so that "x=y1+..+yn” and selling y1 shares on day “i” for “i=1,
2,…,n” maximizes the total income achievable. You should assume that the share
“p1”, is monotone decreasing, and f(y) is monotone increasing; that is, selling
a larger number of shares causes a large drop in the price. Your algorithm’s
running time can have a polynomial dependence on “n” (the number of days), “x”
(the number of shares), and “y” (the peak price of the stock).
EXAMPLE: Consider the case when n=3, the prices for me three days are 90, 80,
40; and f(y) = 1 for y≤40000 and f(y) = 20 for y>40000. Assume you start whit
x=100000 shares. Selling all of them on day 1 would yield a price of 70 per
share, for a total income of 7000000. On the other hand, selling 40000 shares on
day 1 yields a price of 89 per share, and selling the remaining 60000 shares on
day 2 results in a price of 59 per share, for a total income of 7000000."
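To check the example's second strategy: day 1 sells 40000 shares at 90 − f(40000) = 90 − 1 = 89 per share, earning 40000•89 = 3,560,000; day 2 then sells 60000 shares at 80 − f(40000) − f(60000) = 80 − 1 − 20 = 59 per share, earning 60000•59 = 3,540,000. The total, 7,100,000, beats the 7,000,000 from selling everything on day 1.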
2. Analysis
From the applied-mathematics point of view this is a typical optimization problem: the objective is to maximize the sale revenue under time and quantity constraints.
Objective function: max Σ(i=1..n) yi•(pi − Σ(j=1..i) f(yj)), where
i: within [1, n] — the time constraint: all shares must be sold within n days
yi: the number of shares to sell on day i, subject to
Σ(i=1..n) yi = x — the quantity constraint: all x shares are sold over the n days
Σ(j=1..i) f(yj): the accumulated price discount caused by the sales on the
previous i−1 days and on the current day i
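For reference, the same model written out in standard notation:

\[
\max_{y_1,\dots,y_n \in \mathbb{N}} \; \sum_{i=1}^{n} y_i \Bigl( p_i - \sum_{j=1}^{i} f(y_j) \Bigr)
\quad \text{subject to} \quad \sum_{i=1}^{n} y_i = x .
\]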
Any modelling software can be applied to solve this problem, for instance Matlab [1] or gPROMS ModelBuilder [2].
Here I would like to use dynamic programming to solve this problem, simply leveraging the computing power and abundant memory of modern machines. Here is the dynamic programming formulation:
MaxSale(P, X, N) = max{ yi•(pi − Σf(yj)) + MaxSale(P, X − yi, N−1) }, for N > 1
MaxSale(P, X, 1) = X•(pn − Σf(yj)) — on the last day, all remaining shares must
be sold, whatever the price is, to maximize the revenue
, where
yi within [0, X], and Σf(yj) is the accumulated price discount from all sales
up to and including the current day.
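To make the recurrence concrete before the full implementation, here is a stripped-down, cache-free sketch of it (my own illustration, reusing the two helper functions defined in Section 3; the step size over yi is 1 here):

// A cache-free sketch of the recurrence MaxSale; the day counts up from 0,
// so "N days remaining" corresponds to (n - day) here.
double MaxSaleSketch(size_t day, size_t n, size_t remaining, double discount)
{
    double price = GetPredictedSharePrice(day);
    if (day + 1 == n) {
        // Last day: all remaining shares must be sold, whatever the price.
        double p = price - discount - GetEffectOnSharePriceFromLargeSales(remaining);
        return p > 0 ? p * remaining : 0.0;
    }
    double best = 0;
    for (size_t y = 0; y <= remaining; ++y) {
        double d = discount + GetEffectOnSharePriceFromLargeSales(y);
        double v = (price - d) * y
                 + MaxSaleSketch(day + 1, n, remaining - y, d);
        if (v > best) {
            best = v;
        }
    }
    return best;
}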
3. C++ Implementation
// Code
//********************************************************************************
#include <ctime>
#include <iostream>
#include <unordered_map>
// Generate pi
double GetPredictedSharePrice(size_t day)
{
    // Hard coded for the example
    const double prices[] = { 90.0, 80.0, 40.0, 30.0 };
    return prices[day];
}
// Generate f(yi)
double GetEffectOnSharePriceFromLargeSales(size_t shares)
{
    if (shares == 0) {
        return 0;
    }
    if (shares <= 40000) {
        return 1;
    }
    else {
        return 20;
    }
}
// Key used for the hash table that caches sub-problems
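// Note: the accumulated price discount is part of the key because the same
// (day, remaining shares) state can be reached through different sale
// histories, whose differing accumulated discounts give different values.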
struct StockExchangeKey {
    size_t cur_day;
    size_t remaining_share;
    double accumulated_price_discount;
    // Equality test, invoked through StockExchangeKey_Hash below
    bool operator() (const StockExchangeKey& right) const {
        return cur_day == right.cur_day &&
               remaining_share == right.remaining_share &&
               accumulated_price_discount == right.accumulated_price_discount;
    }
};
// Hash function for the user-defined hash key
struct StockExchangeKey_Hash {
    size_t operator() (const StockExchangeKey& key) const {
        return std::hash<size_t>()(key.cur_day) ^
               std::hash<size_t>()(key.remaining_share) ^
               std::hash<double>()(key.accumulated_price_discount);
    }
    bool operator() (const StockExchangeKey& k1, const StockExchangeKey& k2) const {
        return k1.operator()(k2);
    }
};
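// Note: StockExchangeKey_Hash is passed twice to std::unordered_map below,
// once as the hash functor and once as the key-equality functor (its two
// operator() overloads serve those two roles).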
typedef std::unordered_map<StockExchangeKey, double, StockExchangeKey_Hash, StockExchangeKey_Hash> StockExchangeHashMap;
// dynamic programming solution
double StrategyForSale(size_t duration_days,
                       size_t minimal_quantity_for_sale,
                       size_t remaining_shares,
                       size_t cur_day,
                       double accumulated_price_effect_over_sale,
                       StockExchangeHashMap& SE_Cache)
{
    if (remaining_shares == 0) {
        return 0;
    }
    double priceOfDay = GetPredictedSharePrice(cur_day);
    if ((cur_day + 1) == duration_days) {
        // On the last day, sell whatever is in hand at
        // whatever the price is to cash in
        double priceForSale = priceOfDay - accumulated_price_effect_over_sale -
            GetEffectOnSharePriceFromLargeSales(remaining_shares);
        if (priceForSale > 0) {
            return priceForSale * remaining_shares;
        }
        // Should not be hit, because the share price must be > 0
        return 0;
    }
    // Look for a computed sub-problem in the hash table
    StockExchangeKey key{ cur_day,
                          remaining_shares,
                          accumulated_price_effect_over_sale };
    {
        StockExchangeHashMap::const_iterator constIt = SE_Cache.find(key);
        if (constIt != SE_Cache.end()) {
            return constIt->second;
        }
    }
    double maxShareValue = 0;
    double shareValue;
    double accumPriceEffectOverSale;
    for (size_t sharesForSale = 0; sharesForSale <= remaining_shares;
         sharesForSale += minimal_quantity_for_sale) {
        accumPriceEffectOverSale = accumulated_price_effect_over_sale +
            GetEffectOnSharePriceFromLargeSales(sharesForSale);
        // Value of selling "sharesForSale" shares today
        shareValue = (priceOfDay - accumPriceEffectOverSale) * sharesForSale;
        // Add the best value achievable with the rest of the shares
        shareValue += StrategyForSale(duration_days,
                                      minimal_quantity_for_sale,
                                      remaining_shares - sharesForSale,
                                      cur_day + 1,
                                      accumPriceEffectOverSale,
                                      SE_Cache);
        if (shareValue > maxShareValue) {
            maxShareValue = shareValue;
        }
    }
    // Cache the sub-problem
    SE_Cache.insert(std::make_pair(key, maxShareValue));
    // std::cout << "maxShareValue: " << maxShareValue << std::endl;
    return maxShareValue;
}
double MaxValueOfStockExchange(size_t days, size_t minimalQuantityForSale, size_t shares)
{
    StockExchangeHashMap seCache;
    return StrategyForSale(days, minimalQuantityForSale, shares, 0, 0.0, seCache);
}
// test
int main()
{
    clock_t startTime = clock();
    double maxValue = MaxValueOfStockExchange(3, 100, 100000); // 7.42e+6
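    // One strategy consistent with 7.42e+6: sell 40000 shares on day 1
    // (price 90-1 = 89), 40000 on day 2 (price 80-1-1 = 78), and 20000 on
    // day 3 (price 40-1-1-1 = 37): 40000*89 + 40000*78 + 20000*37 = 7,420,000.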
std::cout << "N = 3, step = 100, X = 100000: "
<< double(clock() - startTime) / (double)CLOCKS_PER_SEC
<< std::endl;
std::cout << "maxValue: " << maxValue << std::endl;
startTime = clock();
maxValue = MaxValueOfStockExchange(3, 10, 100000);
std::cout << "N = 3, step = 10, X = 100000: "
<< double(clock() - startTime) / (double)CLOCKS_PER_SEC
<< std::endl;
std::cout << "maxValue: " << maxValue << std::endl;
startTime = clock();
maxValue = MaxValueOfStockExchange(3, 1, 100000);
std::cout << "N = 3, step = 1, X = 100000: "
<< double(clock() - startTime) / (double)CLOCKS_PER_SEC
<< std::endl;
std::cout << "maxValue: " << maxValue << std::endl;
startTime = clock();
maxValue = MaxValueOfStockExchange(4, 100, 100000);
std::cout << "N = 4, step = 100, X = 100000: "
<< double(clock() - startTime) / (double)CLOCKS_PER_SEC
<< std::endl;
std::cout << "maxValue: " << maxValue << std::endl;
startTime = clock();
maxValue = MaxValueOfStockExchange(4, 10, 100000);
std::cout << "N = 4, step = 10, X = 100000: "
<< double(clock() - startTime) / (double)CLOCKS_PER_SEC
<< std::endl;
std::cout << "maxValue: " << maxValue << std::endl;
startTime = clock();
maxValue = MaxValueOfStockExchange(4, 1, 100000);
std::cout << "N = 4, step = 1, X = 100000: "
<< double(clock() - startTime) / (double)CLOCKS_PER_SEC
<< std::endl;
std::cout << "maxValue: " << maxValue << std::endl;
return 0;
}
//********************************************************************************
4. Further Discussion
As discussed in Section 2, solving this problem with dynamic programming simply harnesses the brute computing force of the machine. How complex this solution is depends on the search space and on how that space is discretized. Given the nature of the problem, the search space can be easily and intuitively discretized per day along the time constraint and per share along the quantity constraint.
Search depth:
This is decided by the time constraint: one level per day, for N days, with y1 + y2 + ... + yn = X. For each candidate solution, first fix y1, then y2, and so on, with yn = X - (y1 + y2 + ... + yn-1) so that the constraint is not violated. The search depth is therefore the number of days over which the optimization looks ahead.
Search Breadth:
This depends on how many options each yi has. y1 has (X+1) options: 0, 1, ..., X-1, X. y2 has (X - y1 + 1) options: 0, 1, ..., X - y1. Finally, yn has (X - y1 - ... - yn-1 + 1) options: 0, 1, ..., X - y1 - y2 - ... - yn-1.
Based on the search depth and search breadth, the complexity of the naive search is exponential, roughly O(X^N). That is not a practical algorithm, and something has to be done to improve it.
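As a rough sanity check (my own estimate): with X = 100000 and Minimal Sale = 100, each day offers about 1000 options; for N = 3 the last day is forced, so the naive search explores on the order of 1000^2 = 10^6 combinations, and N = 4 would raise that toward 10^9 before caching cuts it down. This matches the growth visible in the table below.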
Caching:
As in the examples where I have shown caching before, Heavy Numbers (Part I) and Heavy Numbers (Part II), this problem is no exception: the caching technique applies here too, to avoid repeatedly computing the same sub-problems. It is worth pointing out that caching does not improve performance when N = 3, because all the sub-problems there are leaf nodes, which this implementation does not cache; the simple leaf computation should be faster than inserting into the hash map, computing the hash value, and locating the record. Caching kicks in when N >= 4.
Here is the table of the time spent solving the example in Section 1. The search breadth is equal to the total shares to sell divided by the minimal sale.
// Table - Performance on Microsoft Visual Studio 2013, Win32 and Release
// Machine: Intel(R) Core(TM) i3-3227U CPU @ 1.90GHz, 6 GB RAM
// Windows 8.1 64-bit
//********************************************************************************
//          X = 100000              X = 100000              X = 100000
//          Minimal Sale = 100      Minimal Sale = 10       Minimal Sale = 1
//          Search Breadth = 1000   Search Breadth = 10000  Search Breadth = 100000
// N = 3    0.019 seconds           1.026 seconds           82.389 seconds
// N = 4    0.186 seconds           8.312 seconds           1070.13 seconds
//********************************************************************************
From these experiments we can see how the "Minimal Sale" affects performance: the bigger it is, the faster the solution runs, simply because it reduces the search breadth. As for N, there is little we can do: the bigger it is, the more advantage the business can gain, under the assumption that this optimization does bring maximal profit. Ideally, the larger N is, the better for the business, and the slower the program runs.
Output:
Simply returning the maximal value of the stock sale may not be ideal for the business, because a dealer cares more about how to reach that maximal value. Both the sale strategy, y1, y2, ..., yn, and the maximal value of the sale should be returned.
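Here is a minimal sketch of one way to do that (my own extension, not in the code above): suppose StrategyForSale is modified to cache, for every state, both the best value and the yi that achieves it; the plan can then be replayed from day 0. BestChoice, StrategyCache, and ExtractStrategy are hypothetical names.

// A minimal sketch: assumes the cache stores, per state, the best number of
// shares to sell (BestChoice) in addition to the best achievable value.
#include <vector>

struct BestChoice {
    double value;         // maximal income achievable from this state
    size_t sharesToSell;  // the yi achieving that value
};

typedef std::unordered_map<StockExchangeKey, BestChoice,
                           StockExchangeKey_Hash, StockExchangeKey_Hash> StrategyCache;

// Replay the cached decisions from day 0 to extract y1, ..., yn.
std::vector<size_t> ExtractStrategy(const StrategyCache& cache,
                                    size_t duration_days, size_t shares)
{
    std::vector<size_t> plan;
    double discount = 0.0;
    for (size_t day = 0; day + 1 < duration_days; ++day) {
        StockExchangeKey key{ day, shares, discount };
        StrategyCache::const_iterator it = cache.find(key);
        if (it == cache.end()) {
            break;  // state never computed (e.g., no shares left)
        }
        size_t y = it->second.sharesToSell;
        plan.push_back(y);
        discount += GetEffectOnSharePriceFromLargeSales(y);
        shares -= y;
    }
    plan.push_back(shares);  // last day: sell everything that is left
    return plan;
}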
Bibliography:
[1] http://www.mathworks.co.uk/products/matlab/
[2] http://www.psenterprise.com/modelbuilder.html