How to find the optimal price on Shopify when you don’t know competitors’ prices
Pricing on Shopify is easy when the market gives you a ready-made answer: many stores sell the same product, so you can see price ranges, promotion standards, and a “market average.” The challenge starts when there is no identical equivalent of your product — not because there’s no competition at all, but because there’s no 1:1 product you can fairly compare against.
In those situations, it’s very easy to make two expensive mistakes:
Undervalue the product “just to be safe” (and never learn what customers are actually willing to pay).
Set the price too high without validation (and conclude “the product doesn’t sell,” even though the real issue is a pricing barrier or weak value communication).
The good news: lack of identical comparisons doesn’t mean you must guess. It simply means that instead of competitor benchmarking, you need a research + experimentation process: WTP surveys + price testing + profit analysis.

What are “products with no identical competition” (and why it matters)?
In this article, I’m talking about products that are unique in terms of comparability: a customer may find alternatives, but won’t find the same offer in the same specification. The most common cases:
Brand-designed and brand-produced products (custom molds, original design, unique materials or formulas) — the category may be popular, but your version has no “twin.”
Personalized products (engraving, configurators, custom options) — the final product varies by parameters, so each order is effectively different.
Limited runs and drops — availability is limited in time or quantity, so there’s no stable “market price.”
Bundles and sets (packs, boxes, kits) — even if individual items exist elsewhere, the bundle’s value creates a new product.
Products with a service component (e.g., consultation + product, after-sales support, fitting, extended warranty) — customers compare the full experience, not just the “thing.”
Handmade / artisan / premium products — differences in craftsmanship, materials, process, and brand reputation make “same thing, cheaper” comparisons misleading.
Why does this definition matter? Because in these categories, price doesn’t come from simple market observation. Instead, price is a function of:
perceived value (how customers understand the difference between your offer and alternatives),
willingness to pay (WTP),
demand elasticity (how strongly demand reacts to price changes),
and your real cost constraints.
1) What does “optimal price” mean in e-commerce?
In practice, you have at least three “optimal” prices — depending on the goal:
Profit maximization: how much you keep after variable costs, marketing, and returns.
Revenue maximization: useful for scaling, but often deceptive (you can grow revenue and still lose cash).
Penetration / share maximization: sometimes rational early on, when you’re building a customer base and reviews.
If you don’t have competitor benchmarks, the most reliable approach is to optimize for profit per visitor (or contribution margin per visitor).
Why “per visitor”? Because you’re directly tying price to how much you earn from traffic — regardless of whether you have 100 or 1,000 sessions in a given week.
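To make the metric concrete, here is a minimal sketch (in Python, with made-up numbers) of how profit per visitor can disagree with raw order counts:

```python
# Minimal sketch: why "profit per visitor" can disagree with "units sold".
# Both scenarios below are illustrative placeholders.

unit_costs = 92.0
scenarios = {
    "lower price":  {"price": 139.0, "sessions": 1_000, "orders": 30},
    "higher price": {"price": 169.0, "sessions": 1_000, "orders": 24},
}

for name, s in scenarios.items():
    profit = s["orders"] * (s["price"] - unit_costs)
    print(f'{name}: {s["orders"]} orders, profit per visitor = {profit / s["sessions"]:.2f}')

# The lower price sells more units (30 vs 24) but earns less per visitor (1.41 vs 1.85).
```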
2) Set constraints — minimum price and target margin
Before you start testing, calculate a hard price floor. That prevents you from “winning” a price test with a price that sells — but quietly destroys the business.
Minimal (practical) model
Calculate contribution margin per unit:
Contribution margin = Price – (COGS + packaging + payment fees + fulfillment + average return cost)
If you buy traffic (ads), also add:
CAC / acquisition cost (average per order),
a buffer for seasonality and ad cost volatility.
Result: a minimum price, i.e. the lowest price at which your contribution margin (after acquisition costs) still meets the target. You don’t test below it, because “higher sales” can easily mean “worse business.”
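As a rough illustration, here is a minimal sketch of that price-floor calculation; every cost value and the target margin are placeholders you would replace with your own numbers:

```python
# Minimal sketch: a hard price floor from unit costs, acquisition cost, and a target margin.
# Every number below is a placeholder.

cogs = 45.0
packaging = 3.0
fulfillment = 12.0
avg_return_cost = 4.0        # average return cost spread across all orders
cac = 20.0                   # average paid-acquisition cost per order
buffer = 5.0                 # cushion for seasonality and ad-cost volatility
payment_fee_rate = 0.02      # share of the price taken by the payment provider
target_margin = 15.0         # minimum contribution you want to keep per order

fixed_unit_costs = cogs + packaging + fulfillment + avg_return_cost + cac + buffer

# Solve: price - payment_fee_rate * price - fixed_unit_costs >= target_margin
min_price = (fixed_unit_costs + target_margin) / (1 - payment_fee_rate)
print(f"Don't test prices below: {min_price:.2f}")
```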
3) If the product already sells: incremental tests and interval tests
Already getting sales? Great — you can rely on real buyer behavior, not just declarations.
Method A: “Small steps” (incremental price increases)
This is the safest way to grow margin on a stable SKU.
increase price by 0.5–1% every 7–14 days (or per purchase cycle if it’s longer),
compare profit per visitor and profit per 1,000 sessions (not only order count),
stop if profit drops for two consecutive steps (to avoid being fooled by randomness),
if it drops, roll back one step and treat that as your baseline.
Why it works: small changes often go unnoticed, and you gradually find the demand boundary.
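If you want to formalize the stop rule, here is a minimal sketch. The measurement function is a stand-in for your own reporting (it fakes a demand curve just so the example runs end to end); the step size and cadence follow the suggestion above:

```python
# Minimal sketch: "small steps" price increases with a two-drops stop rule.

def measure_profit_per_visitor(price: float) -> float:
    """Placeholder: in practice, read profit per visitor for the last 7-14 days
    at this price from your order and analytics data. Here we fake elasticity."""
    conversion_rate = max(0.0, 0.04 - 0.0006 * (price - 150))
    unit_costs = 92.0
    return conversion_rate * (price - unit_costs)

def run_small_steps(start_price: float, step_pct: float = 0.01, max_steps: int = 20) -> float:
    best_price = start_price
    best_ppv = measure_profit_per_visitor(start_price)
    price = start_price
    consecutive_drops = 0

    for _ in range(max_steps):
        price = round(price * (1 + step_pct), 2)
        ppv = measure_profit_per_visitor(price)
        if ppv > best_ppv:
            best_price, best_ppv = price, ppv
            consecutive_drops = 0
        else:
            consecutive_drops += 1
            if consecutive_drops >= 2:   # profit fell two steps in a row: stop
                break                    # and roll back to the best price seen
    return best_price

print("Baseline after small steps:", run_small_steps(150.0))
```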
Method B: interval testing (A/B over time)
If you want to test a bigger gap (e.g., 159 vs 189), use alternating periods.
Example schedule (4 weeks):
Week 1: Price A
Week 2: Price B
Week 3: Price A
Week 4: Price B
This spreads seasonality and “market mood” across both prices. It’s not a perfect split test, but it’s realistic and often sufficiently reliable.
Important: keep conditions comparable (traffic mix, ad budgets, no major offer changes mid-test), otherwise you’re testing everything at once.
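Afterwards, the comparison is just an aggregation per price. A minimal sketch, with made-up weekly figures:

```python
# Minimal sketch: comparing two prices tested in alternating weeks.
# Weekly figures below are illustrative placeholders.

weeks = [
    # (price_variant, sessions, orders, price, unit_costs)
    ("A", 2_100, 52, 159.0, 92.0),   # week 1
    ("B", 1_950, 41, 189.0, 92.0),   # week 2
    ("A", 2_300, 57, 159.0, 92.0),   # week 3
    ("B", 2_050, 44, 189.0, 92.0),   # week 4
]

totals = {}
for variant, sessions, orders, price, unit_costs in weeks:
    t = totals.setdefault(variant, {"sessions": 0, "profit": 0.0})
    t["sessions"] += sessions
    t["profit"] += orders * (price - unit_costs)

for variant, t in totals.items():
    print(f"Price {variant}: profit per visitor = {t['profit'] / t['sessions']:.4f}")
```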
4) If the product is new: skimming vs penetration
A new product with no sales history needs a launch strategy. Two sensible paths:
Penetration (from the bottom)
start low (but above your minimum price),
collect first orders, reviews, and UGC,
then raise price using “small steps.”
Pros: faster proof (reviews, social proof).
Cons: risk of anchoring your price too low (harder to move into premium later).
Skimming (from the top)
start higher,
reduce price by 1–2% every 7–14 days,
stop when you reach acceptable volume and profit.
Pros: better for premium/limited products; you don’t train the market to expect “cheap.”
Cons: requires patience and strong value communication (high prices expose weak positioning fast).
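Both paths boil down to a geometric price ladder that never crosses the minimum price from section 2. A minimal sketch; starting prices, step sizes, and the floor are illustrative:

```python
# Minimal sketch: price ladders for penetration (stepping up) and skimming (stepping down).
# Start above the minimum price; every number below is a placeholder.

def price_ladder(start: float, step_pct: float, steps: int, floor: float) -> list[float]:
    prices, price = [], start
    for _ in range(steps):
        prices.append(round(price, 2))
        price *= (1 + step_pct)
        if price < floor:          # never plan a step below the minimum price
            break
    return prices

minimum_price = 120.0
print("Penetration:", price_ladder(start=129.0, step_pct=+0.01, steps=10, floor=minimum_price))
print("Skimming:   ", price_ladder(start=199.0, step_pct=-0.02, steps=10, floor=minimum_price))
```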
5) WTP research without competitors: Van Westendorp and Gabor-Granger
When nobody sells an identical product, price doesn’t “come from the market” — it comes from what customers believe the product is worth. That’s exactly what WTP (willingness to pay) methods measure.
The key principle: WTP studies don’t replace real sales testing — they precede it and guide it. They give you a sensible range and clear price hypotheses before you start experimenting on live traffic (and paying for mistakes).
5.1. When WTP makes the most sense
WTP is especially useful when:
you’re launching a new product and don’t want to start blind,
your traffic is low and sales tests would take months,
you want to narrow down to 2–4 strong price points to test,
you’re entering a new market (different price sensitivity),
the product’s price is driven heavily by brand and perceived value.
5.2. Two WTP pitfalls (so you don’t draw the wrong conclusions)
Pitfall 1: stated intent ≠ real buying behavior. In surveys, people answer differently than they buy — because they don’t feel the “pain of paying” and lack full purchase context.
Pitfall 2: weak value context. If respondents don’t understand the offer, they’re not pricing the product — they’re pricing their imagination.
How to reduce bias (practically):
target the survey to your real audience (newsletter, IG followers, retargeting),
show a mini product card: image + 3–5 value bullets + one line about materials/process + shipping/returns terms,
ask people as close to purchase intent as possible (after product-page visit, after waitlist signup),
treat the result as a playing field, not a final verdict.
5.3. Van Westendorp (Price Sensitivity Meter) — for defining a price range
Van Westendorp helps you build an acceptable price range and find thresholds where price becomes:
“too cheap to be believable” (quality suspicion),
“cheap” (a good deal),
“expensive but still consider-worthy,”
“too expensive” (rejected).
The four questions (logic)
In the classic version, you ask at what price the product is:
so cheap that it raises doubts about quality,
cheap (good value),
expensive, but still worth considering,
so expensive that it’s not worth buying.
What you get (why it’s worth doing)
From responses you build distributions that intersect at key points:
Acceptable range — between “too cheap” and “too expensive.” This is your test playground.
IPP (Indifference Price Point) — where the price shifts from “cheap” to “expensive” in market perception.
OPP (Optimal Price Point) — where “too cheap” and “too expensive” perceptions balance (often a strong starting point).
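If you want to run the curve analysis yourself rather than eyeball a chart, here is a minimal sketch. The respondent answers are fabricated placeholders; with real data you would load your survey export instead:

```python
# Minimal sketch: Van Westendorp curves and crossing points (pure Python).
# Each row is one respondent: (too_cheap, cheap, expensive, too_expensive).
# All answers below are fabricated placeholders.

answers = [
    ( 90, 130, 160, 180),
    (120, 160, 190, 220),
    ( 80, 110, 140, 160),
    (140, 180, 210, 240),
    (100, 140, 170, 190),
    (160, 200, 230, 260),
    ( 70, 100, 130, 150),
    (110, 150, 180, 210),
]
n = len(answers)
prices = range(60, 271)   # price grid to evaluate the curves on

def share(predicate):
    """Fraction of respondents for whom the predicate holds."""
    return sum(predicate(r) for r in answers) / n

def curves(p):
    too_cheap     = share(lambda r: r[0] >= p)   # falls as price rises
    cheap         = share(lambda r: r[1] >= p)
    expensive     = share(lambda r: r[2] <= p)   # rises as price rises
    too_expensive = share(lambda r: r[3] <= p)
    return too_cheap, cheap, expensive, too_expensive

# OPP: where "too cheap" and "too expensive" cross; IPP: where "cheap" and "expensive" cross
opp = min(prices, key=lambda p: abs(curves(p)[0] - curves(p)[3]))
ipp = min(prices, key=lambda p: abs(curves(p)[1] - curves(p)[2]))
print(f"Optimal Price Point (OPP): ~{opp}")
print(f"Indifference Price Point (IPP): ~{ipp}")
```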
How to use it on Shopify (without academic analysis)
Identify the acceptable range (e.g., 120–190 PLN).
Choose 3 test prices:
lower-middle (e.g., 139–149),
middle (e.g., 159),
upper-middle (e.g., 179).
Run a sales test and choose based on profit per visitor.
Most common Van Westendorp mistakes
surveying random people (not your customer),
providing no value context (respondent doesn’t understand the offer),
mixing markets (PL and DE in one survey),
treating OPP as “the one true price” (it’s a hypothesis, not a decree).
5.4. Gabor-Granger — for finding the “demand break” point
Gabor-Granger is more direct: instead of “is this expensive?”, you ask for purchase likelihood at a specific price.
How it works
Show price X and ask: “Would you buy at this price?” (yes/no) or “How likely are you to buy?” (1–5 / 1–10).
If “yes” → show a higher price.
If “no” → go lower.
This builds a curve: where demand holds, and where it drops sharply.
What you get
the point where acceptance starts collapsing (a barrier),
a small set of price points worth testing in the store,
a clear map of “how much acceptance does +10% price cost you?”
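The analysis step is simple enough for a spreadsheet or a few lines of code. A minimal sketch that turns yes/no answers into an acceptance curve and flags the biggest drop; price levels and response counts are fabricated:

```python
# Minimal sketch: Gabor-Granger acceptance curve and the "demand break".
# Price levels and response counts below are fabricated placeholders.

levels = [
    # (price, respondents asked, answered "yes, would buy")
    (139.0, 60, 48),
    (149.0, 60, 44),
    (159.0, 60, 39),
    (169.0, 60, 27),
    (179.0, 60, 18),
    (189.0, 60, 12),
]

results = []
for price, asked, yes in levels:
    acceptance = yes / asked
    expected_revenue = price * acceptance   # per respondent, before costs
    results.append((price, acceptance))
    print(f"{price:>6.0f}  acceptance={acceptance:.0%}  expected revenue/respondent={expected_revenue:.2f}")

# The barrier: the step with the biggest drop in acceptance
drops = [(results[i][0], results[i - 1][1] - results[i][1]) for i in range(1, len(results))]
break_price, biggest_drop = max(drops, key=lambda d: d[1])
print(f"Acceptance drops most ({biggest_drop:.0%}) when moving to {break_price:.0f} - test around this barrier.")
```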
How to choose price levels
The simplest approach:
use 5–7 price levels,
use percentage steps (e.g., 5–10%) because they’re more realistic for higher-priced products,
start at a “middle” price and move up/down adaptively.
How to reduce stated-intent bias
target people with real purchase context,
ask when they’re warmed up (after product-page exposure),
immediately validate 2–3 prices via a store test.
5.5. WTP in practice: a fast “survey → sales test” workflow
If you want to implement this without a PhD:
Survey (2–5 days): Van Westendorp (range) or Gabor-Granger (price points).
Pick 3 test prices (1 day)
“safe” price (lower-middle),
“ambitious” price (middle),
“premium” price (upper-middle).
Store test (2–4 weeks): ideally a parallel split test. If traffic is low, run an interval test.
Decide based on profit: choose the variant with the best profit per visitor, while keeping volume and returns acceptable.
6) Real price A/B tests on Shopify (without naming specific tools)
If you have meaningful traffic, parallel split tests are best because they remove much of seasonality.
On Shopify, it’s worth using off-the-shelf experimentation tools (apps or CRO platforms), because they solve common technical issues:
assigning users to a price variant,
keeping pricing consistent for returning visitors,
reporting results,
reducing implementation errors.
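If you do wire something up yourself instead of using an app, the first two points above (assignment and consistency for returning visitors) usually come down to deterministic bucketing. A minimal sketch, assuming a stable visitor identifier such as a first-party cookie value; the identifier, experiment name, split, and prices are illustrative:

```python
# Minimal sketch: deterministic assignment of a visitor to a price variant.
# Assumes some stable visitor id exists (e.g., a first-party cookie value).

import hashlib

PRICE_VARIANTS = {"A": 159.0, "B": 189.0}

def assign_variant(visitor_id: str, experiment: str = "price-test-1") -> str:
    """Hash the visitor id so the same visitor always sees the same price."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100           # 0-99
    return "A" if bucket < 50 else "B"       # 50/50 split

visitor = "cookie-3f7a91"
variant = assign_variant(visitor)
print(visitor, "->", variant, PRICE_VARIANTS[variant])
```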
If traffic is smaller, interval tests (A/B over time) still work — you just need comparable conditions.
7) How to analyze results: metrics that actually matter
The easiest mistake in pricing is focusing on “units sold.”
Use four layers of metrics:
Profit per visitor (or contribution margin per visitor)
AOV and basket mix (does price change bundling / cross-sell?)
CR (conversion rate)
Refund/return rate + support load (does higher price change expectations and returns?)
If you buy traffic, also monitor:
profit after ad costs,
stability over time (one-off spike vs trend).
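To keep all the layers in one place, here is a minimal sketch for a single variant over a single test period, including profit after ad costs; all figures are placeholders:

```python
# Minimal sketch: the metric layers for one price variant over one test period.
# Every number below is an illustrative placeholder.

price = 159.0
sessions = 4_400
orders = 96
unit_costs = 92.0          # COGS + packaging + payment fees + fulfillment per order
refunded_orders = 5
ad_spend = 3_200.0

revenue = orders * price
refund_cost = refunded_orders * price              # simplified: full value refunded
contribution = orders * (price - unit_costs) - refund_cost

aov = revenue / orders                             # with a single SKU this equals the price;
conversion_rate = orders / sessions                # with bundles/cross-sell it will differ
refund_rate = refunded_orders / orders
profit_per_visitor = contribution / sessions
profit_after_ads = contribution - ad_spend

print(f"AOV: {aov:.2f}")
print(f"Conversion rate: {conversion_rate:.2%}")
print(f"Refund rate: {refund_rate:.1%}")
print(f"Profit per visitor: {profit_per_visitor:.4f}")
print(f"Profit after ad costs: {profit_after_ads:.2f}")
```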
8) If price cuts don’t work: the problem is often not price
A common pattern with unique products:
"I lower the price, and sales barely change."
That often means customers don’t clearly understand what they’re buying and why it’s worth the price. In that case, pricing tests won’t move much until you fix value communication.
What typically moves the needle most:
photos (scale, usage context, craftsmanship details),
video (product in use),
quality proof (materials, process, guarantees),
reviews and UGC,
offer design (bundles, personalization, bonuses),
only then “price tuning.”