Recommendation: place an interlibrary loan request with full citation details and explicit condition notes, and contact the special collections desk to confirm format (paper folio or scanned PDF). Research use is best served by ordering a high-resolution scan; physical handling often requires a two-week quarantine and restricted access.
Content summary and handling notes: the pamphlet sparked public debate and is explicitly addressed to both household stewards and legislators; its rhetoric invites literal readings when quoted out of context. Social sanctions described in the text were legally enforced in several jurisdictions at the time, and the author anticipates her meed of social rebuke. Expect processing delays of several weeks from request to delivery. Contemporary reviews note that the preface opens with a compact argument staged for dramatic effect.
Catalog tips and interpretive angles: search for holdings under subject headings covering domestic economy, gender critique, or commercial alliances; some catalog records carry the tag csaethiopia or similar legacy codes, so include that string when searching aggregated databases. Trace the work's influence through subsequent pamphlets and legislative rejoinder pieces; one surviving marginal note, in archaic register, urges that "thou shalt" reinterpret conventional roles; cite the exact wording in your notes to avoid misquotation. Researchers should pay particular attention to passages where public space is described as invaded by private interests, where policy failure is framed as a brittle shell protecting privilege, and where the text marvels at the relational dynamics between economic exchange and domestic obligation.
Marriage as a Trade – Cicely Hamilton (1909) & Sample Size Determination and Sampling Techniques
Recommendation: Use formula-driven sample sizes and adjust for design and nonresponse before fieldwork. For proportions, n = (Z² · p(1−p))/E² (95% CI, Z = 1.96, p = 0.5, E = 0.05 → n = 384); for means, n = (Z² · σ²)/E² (σ = 10, E = 2 → n ≈ 97). Apply the finite-population correction n_adj = n/(1 + (n−1)/N) when N is small, then inflate by the expected nonresponse rate (example: 10% nonresponse → n_final = n_adj/0.9).
- Power calculations: set α = 0.05 and power = 0.80 (Zβ ≈ 0.84). Detecting a 10-percentage-point difference (p1 = 0.50 → p2 = 0.40) requires ≈387 per arm; with DEFF = 1.5, ≈580 per arm.
- Design effect: DEFF = 1 + (m−1)·ICC. If ICC=0.02 and cluster size m=30 → DEFF≈1.58; multiply base n by DEFF and then add nonresponse.
- Stratification: allocate sample proportional to strata size unless precision targets demand optimal allocation; reweight post‑stratification to correct for differential response from workers, bread-winning households, or other existing subgroup imbalances.
- Cluster sampling: choose the number of clusters to minimize between-cluster variance; prefer more clusters with smaller m when ICC > 0.01. Practical minimum: 20 clusters per arm for comparative studies.
- Systematic sampling: acceptable on a randomly ordered site list; avoid it if periodicity in the list correlates with the outcome, since patterns aligned with the sampling interval create bias.
- Pilot & σ-estimation: run small pilot workshops (n ≈ 30–50) to estimate σ and p before the final calculation; researchers must record habitual response patterns and twin-shock events (two concurrent disruptions) that inflate variance.
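The adjustment pipeline above (base n → finite-population correction → DEFF → nonresponse) can be sketched in a few lines; the figures mirror the worked examples in this section, and the function names are illustrative:

```python
import math

def n_proportion(z=1.96, p=0.5, e=0.05):
    """Base sample size for a proportion: n = Z^2 * p * (1 - p) / E^2."""
    return z ** 2 * p * (1 - p) / e ** 2

def apply_fpc(n0, pop_size):
    """Finite-population correction: n_adj = n0 / (1 + (n0 - 1) / N)."""
    return n0 / (1 + (n0 - 1) / pop_size)

def final_n(n0, deff=1.0, nonresponse=0.0):
    """Inflate by design effect, then by expected nonresponse; round up."""
    return math.ceil(n0 * deff / (1 - nonresponse))

base = n_proportion()                       # ≈ 384.2 (the n = 384 in the text)
adjusted = apply_fpc(base, pop_size=2000)   # FPC pulls this down to ≈ 322
print(final_n(adjusted, deff=1.58, nonresponse=0.10))
```

The same helpers cover the checklist below: compute the base n once, then apply each correction in order and round up only at the end.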
Operational checklist for field teams:
- Estimate base n for target precision and then: apply FPC, multiply by DEFF, inflate for nonresponse, round up to nearest practical cluster size.
- Document site selection rules, worker roles, and mate networks to avoid illogical exclusions; protect against selection bias among ordinary and marginal subgroups (spinsters, bread-winning mothers, households with grown children).
- Pre-register assumptions (p, σ, ICC) and defend them with pilot data; if challenged, present sensitivity analyses showing how n changes if p shifts by ±0.10 or ICC by ±0.02.
- Use probability sampling where possible; if convenience or purposive sampling is used, explicitly state limitation and provide estimate bounds using bootstrap or weighting adjustments.
- When dealing with clustered interventions, predict intracluster correlation and plan for at least 80% power at the adjusted sample size; run simulations if design is complex.
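The sensitivity analyses the pre-registration bullet calls for (how n moves if p shifts by ±0.10 or ICC by ±0.02) can be tabulated mechanically. A minimal sketch, with the grid values taken from the text and an illustrative function name:

```python
import math

def n_clustered(p, icc, m=30, z=1.96, e=0.05):
    """Base proportion n inflated by DEFF = 1 + (m - 1) * ICC, rounded up."""
    deff = 1 + (m - 1) * icc
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2 * deff)

# Sensitivity grid: n under p +/- 0.10 and ICC +/- 0.02 around the defaults.
for p in (0.40, 0.50, 0.60):
    for icc in (0.00, 0.02, 0.04):
        print(f"p={p:.2f} ICC={icc:.2f} -> n={n_clustered(p, icc)}")
```

Presenting the full grid alongside the chosen assumptions makes the defense of n straightforward if the assumptions are challenged.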
Recommended documentation each study must include:
- Clear statement of the standard error target and margin of error; numeric example calculations for chosen Z, p, σ, ICC, m, DEFF, and nonresponse rate.
- Transparency about existing deficiencies in sampling frames and corrective actions taken (replacement rules, replenishment of lists, weighting approach).
- Codebook for conduct of fieldwork, listing responsibilities for supervisors and enumerators, how habitual refusals are recorded, and how twin-shocks (two concurrent disruptions) are handled in time-series panels.
- Log of workshops and training sessions for enumerators; include at least one replicate inter-rater reliability exercise per site to estimate observer variance.
Practical notes and warnings: do not accept unexamined assumptions of homogeneity; re-check subgroup sizes (mates, mothers, spinster categories) before stratifying. If an estimate is defended solely by convenience, label it exploratory and avoid causal language. Use predictive checks to assess whether the sample will predict key outcomes given current variability; if predictive power is low, increase n or refine measurement to reduce σ. Mention sthephen in metadata only as a tag or case ID when needed, not as analytical shorthand.
Final responsibilities: assign one analyst to compute and archive all sample-size scripts and one field lead per site to ensure the protocol is followed; responsibility for final estimates should be shared between the data manager and the lead researcher to reduce inconsistent reporting and to ensure that standard errors and confidence intervals are reproducible.
Textual Sampling Frame: Selecting Passages for Quantitative Analysis
Select 40 passages of 230–270 words each (target 250) and allocate them equally across the opening, middle, and closing thirds; reserve stratification cells for dialogue-only, narrative-only, and mixed scenes so each cell contains n ≈ 13 passages. Use this fixed-size rule rather than variable excerpts to keep word counts comparable for frequency normalization.
If the full text length L is known, compute the interval I = floor(L / 40). Choose a single random start s in [1, I] and sample passages at s + k·I (k = 0..39). If an edition carries supplementary paratexts or editorial notes, remove those from the sampling frame and record their exclusion in metadata. Where page/line numbers differ across printings, map to a canonical word index before the interval calculation so selection remains replicable.
Define the coding unit as a contiguous 250-word passage; split at clause or sentence level only if a passage begins or ends mid-sentence, in which case extend to the nearest clause boundary without changing the target length by more than ±20 words. To avoid ad hoc or convenience sampling, generate the selection with a seed and preserve the seed in documentation. If the final sample required manual adjustment, annotate why passages were narrowed or replaced and report the replacement logic; substitutions should stay below 5% of the sample to prevent bias toward favourable passages.
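A minimal sketch of the seeded interval selection described above (I = floor(L/40), a single random start), assuming the text has already been mapped to a canonical word index; the seed value is arbitrary and should be archived with the study:

```python
import random

def select_passage_starts(total_words, n_passages=40, seed=12345):
    """Systematic selection: interval I = floor(L / n), one random start s
    in [1, I], passages at s + k*I for k = 0..n-1. Preserve the seed."""
    interval = total_words // n_passages
    start = random.Random(seed).randint(1, interval)
    return [start + k * interval for k in range(n_passages)]

starts = select_passage_starts(100_000)   # I = 2500; 40 start word-indices
```

Re-running with the archived seed reproduces the identical selection, which is what makes manual substitutions auditable.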
Operationalize 18 binary and frequency codes (examples): tone {flattering, disappointed, abusive}, figurative {animal metaphors, kin terms, relational verbs}, agency {pressed, wishes, hardened}, interaction {counterpart references, hears, theirs, generation mentions}. The codebook must state tokenization rules (lemma vs. surface form) and specify that rare items (more than 0 but fewer than 5 instances in the pilot) are flagged rather than aggregated. Pilot-code 10 passages, then double-code 20% of the final sample to compute Cohen's kappa with a target ≥ 0.70; if kappa falls short, retrain coders and re-code the flagged set.
Report sampling diagnostics: effective sample coverage, word-count variance, and any deliberate oversampling of rare phenomena. Weight passage-level counts by passage length when aggregating. Archive raw selections, the seed, edition identifiers, and a table linking passage indices to their textual disposition so external researchers can reconstruct exactly which excerpts were used. This framework does not rely on qualitative impression alone and allows fully reproducible quantitative comparisons across editorial variants and generation cohorts.
Define target units: sentences, paragraphs, or dramatic scenes?
Recommendation: treat dramatic scenes as the primary unit for structural and performative analysis; examine sentences for micro-level rhetoric and paragraphs for thematic cohesion. For fine-grained tagging, annotate sentences inside scene-level boundaries; if scenes are absent, paragraphs effectively replace them, and one-sentence fragments do not qualify as full thematic units unless marked by a stage direction or a clear turn.
Criteria: segment by observable shifts in content, speaker, or stage action. Estimated thresholds: break scenes where a contiguous block exceeds ~250 words or ~15 sentences without a clear turning point; mark paragraph breaks at topic shifts or at least every 40–80 words. Use principle-based labels: exposition, confrontation, decision, aftermath. Label crises and turning points explicitly, noting beginning, midpoint, and resolution timestamps; flag passages showing character change or mentally destabilizing events (use the tag "crises:mental"). Capture gendered registers where manifest: tag womanly or manly rhetorical moves, callings, and social demands that shape character motivation. Record references to disability as functional cues for staging or interpretation.
Implementation: create a three-tier organization scheme (scene.paragraph.sentence IDs) and store metadata fields for creation date, authorial voice, speaker, inferred wants, and influences. Apply automated rules: 1) speaker change plus stage direction = new paragraph; 2) sustained action plus new objective = new scene; 3) punctuation-heavy short turns = sentence unit. Annotate content density (percent dialogue vs. narration); estimated optimal split: scenes carry 60–80% of analytic weight, paragraphs 15–30%, sentences 5–15% for rhetorical tagging. Practical note: when marginalia (e.g., items tagged sthephen) or contemporary criticism influences readings, preserve the original lineation; keep fragments rather than collapsing them, to maintain a reliable record of performance history and the text's specificities.
Stratify by edition and publication year to limit textual variation
Group the corpus by publisher imprint and publication year, using three-year bins for pre-1930 printings and single-year bins for post-1950 printings; require a minimum of three physical or microfilm copies per stratum and merge adjacent bins when sample count < 3.
Extract metadata fields (imprint, year, place, printing-notes) and create normalized text by removing headers, folio marks and printer ornaments; compute pairwise normalized edit distance on tokenized text and flag strata where median distance > 2% for retention as separate versions, and collapse those with median distance ≤ 0.5% into a single canonical stratum. Baseline thresholds are based on prior projects: 0.5% conservative collapse, 2% conservative split.
When a stratum contains only transcriptions or photocopies, annotate provenance and score confidence; if materials originate from communal lots or multiple binders, tag them "mixed-imprint" and run an additional clustering pass. Sample 5% of pages or 5,000 tokens, whichever is larger, to estimate variation; if variation is concentrated in paratext (prefaces, adverts), remove those regions before final decisions to prevent dilution of signal in the body text.
Apply governance rules: merges are governed by reproducible scripts that log decisions, the owner's judgment, and merge history; document every merge with its rationale and a snapshot of all sources. For sociocultural sensitivity, flag texts that reflect twin-shocks (economic disruptions or war) or agricultural shifts in Europe that produced lexical drift; terms such as servant, mouth, luck, debilitating, able-bodied, sons, ambition, pride, passed, chase, possessed, grown appear as markers. Assess whether such markers are authorial or transmissional before merging.
Operational checklist: 1) ingest metadata and materials; 2) normalize and tokenize; 3) compute distances on sampled tokens; 4) apply thresholds (≤0.5% merge, ≥2% keep separate); 5) if intermediate (0.5–2%) run manual adjudication and record archivist opinion; 6) finalize strata and export canonical texts with provenance file. This will limit undue textual variation while preserving meaningful variants for downstream analysis.
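Steps 3–5 of the checklist can be sketched as follows; the Levenshtein implementation is a standard dynamic-programming routine over tokens, and the thresholds are the 0.5%/2% figures stated above:

```python
def levenshtein(a, b):
    """Token-level edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        cur = [i]
        for j, tb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ta != tb)))  # substitution
        prev = cur
    return prev[-1]

def normalized_distance(text_a, text_b):
    """Edit distance normalized by the longer token sequence."""
    a, b = text_a.split(), text_b.split()
    return levenshtein(a, b) / max(len(a), len(b), 1)

def decide(median_dist):
    """Apply the collapse/split thresholds from the checklist."""
    if median_dist <= 0.005:
        return "merge"
    if median_dist >= 0.02:
        return "keep separate"
    return "manual adjudication"
```

For production corpora a tuned library implementation will be faster, but the decision logic stays the same.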
Determine minimum sample size of passages for reliable proportion estimates
Recommendation: use the proportion-sample formula n = (Z² · p · (1−p)) / E²; for a conservative default set p=0.5 and Z=1.96 (95% CI) – that yields n = 384 for E = 0.05, n ≈ 1,068 for E = 0.03, and n = 2,401 for E = 0.02.
Steps with concrete values: 1) pick confidence level (90% Z=1.645 → n≈271 at E=0.05; 99% Z=2.576 → n≈664 at E=0.05). 2) pick target margin E (expressed as proportion). 3) estimate p from pilot data; if unknown use 0.5. 4) compute n0 with the formula and always round up. 5) apply finite-population correction when population N is limited: n_adj = n0 / (1 + (n0 − 1)/N) (example: N=2,000 and n0=384 → n_adj≈323).
Adjustments: multiply n0 by design effect (DEFF) for clustering/annotation dependence (example DEFF=1.5 → 384→576). For low prevalence use p·(1−p) in the formula: if p=0.10 at 95% and E=0.05 → n≈139; if p=0.01 and E=0.05 → n≈16, but require minimum observed positives (rule-of-thumb) of at least 30 positive cases to avoid unstable variance estimates – therefore if p≈0.01 plan for at least 30/0.01 = 3,000 passages to expect ≈30 positives.
Practical cutoffs: absolute minimum total passages = max(30 positives + 30 negatives, computed n from formula after adjustments). If annotations are costly, prefer E=0.05 with DEFF estimate and finite-population correction rather than forcing very small E. Track realized p after data collection and recalc required n to decide whether to continue sampling.
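The minimum-positives rule can be folded into the standard formula so that planning simply takes the larger of the two requirements; the figures below reproduce the p ≈ 0.01 → 3,000-passage example from the text, and the function name is illustrative:

```python
import math

def required_n(expected_p, min_positives=30, z=1.96, e=0.05):
    """Larger of the precision-driven n and the n needed to expect
    at least min_positives observed cases at prevalence expected_p."""
    n_precision = math.ceil(z ** 2 * expected_p * (1 - expected_p) / e ** 2)
    n_rare = math.ceil(min_positives / expected_p)
    return max(n_precision, n_rare)

print(required_n(0.01))   # the 3,000-passage example from the text
```

Note that for p = 0.10 the positives rule (300) already dominates the precision n (139), so the rule binds well before prevalence reaches 1%.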
Examples with required keywords for documentation: annotators reported enjoyment of emotionally arousing excerpts, and some began to feel a debilitating reaction after night sessions; one subject suffers under the workload and craves solitude, while another shows contempt or adopts a dismissive tone; passages describing street scenes that caused humiliation, or incidents whereupon moderators noted services interrupted, were flagged. Annotators stubbornly resisted defeat when categories were distinct; cant terms were promoted to a universal tag, preferred labels were recorded because they feed model priors, and the team noted how robust estimates become when sample size meets these formulas.
Choose selection procedure: systematic with random start, simple random, or purposive
Recommendation: For quantitative prevalence estimation with an ordered frame and N≥200 use systematic sampling with a random start (provides spatial/temporal spread and predictable variance); for small frames (N<200) or when exact equal-probability selection is required use simple random sampling; for targeted hypothesis-testing, pilot case studies, or expert informant work use purposive selection with explicit inclusion criteria and a documented rank list.
Systematic with random start – concrete steps and example: compute the interval k = floor(N/n). Generate a single uniform random integer r in [1, k] (use a reproducible seed). Select units r, r+k, r+2k, … until n is reached. Example: N = 1,200, n = 100 → k = 12; if r = 7, select IDs 7, 19, 31, …, 1195. Check list periodicity: if periodic patterns in school registers or rostered shifts align with k, rotate the frame or switch to simple random sampling. Use audit metrics: compare the sample's age and sex (males/females) distribution to the frame; if observed proportions deviate by more than 5 percentage points, investigate nonresponse or defects in the frame.
Simple random – concrete steps, tools, and reproducibility: compile an exhaustive frame with stable unique IDs 1..N; draw n unique integers via RNG (R: sample(), Python: random.sample(); Excel's RAND() with a top-n sort works in a pinch but is not seedable, so prefer a scripted draw). Recommended when N ≤ 500 or when selection must be defensible against accusations of bias. For reproducibility, store the seed and script. Simple random sampling increases variance relative to a well-implemented systematic design when the population has spatial autocorrelation, but it avoids periodicity problems.
Purposive – recommended uses and limits: select when researching specific arts programs, educational cultivation practices, niche preferences, or crisis response where representativeness is secondary. Define explicit inclusion/exclusion criteria, produce a ranked list of candidates (rank by domain expertise, availability, or severity of issue), and set target quotas (typical qualitative range 10–50 participants). Document rationale for each selection and record witness statements to justify choices. Expect selection bias; treat findings as contextual and avoid extrapolating prevalence.
| Procedure | Best suited for | Implementation | Risks & mitigation |
|---|---|---|---|
| Systematic with random start | Large ordered frames (N≥200); surveys needing spread | k=floor(N/n); choose r∈[1,k]; select r + t·k; log seed | Periodicity bias – check for patterns; if present randomize start and segment frame |
| Simple random | Small frames, audit samples, equal-probability requirement | Assign IDs, draw n via RNG, store seed and code | Higher logistical cost for large N; mitigate with stratification |
| Purposive | Qualitative studies, expert interviews, hard-to-reach groups | Create selection criteria, rank candidates, set quotas (10–50) | Selection bias; mitigate with transparency, supplementary random sub-sample |
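Under the assumptions in the table, the two probability procedures reduce to a few lines each. A sketch with seeds stored for reproducibility; the example figures follow the N = 1,200, n = 100 walkthrough above:

```python
import random

def systematic_sample(pop_n, n, seed=7):
    """k = floor(N/n); one random start r in [1, k]; select r + t*k."""
    k = pop_n // n
    r = random.Random(seed).randint(1, k)
    return [r + t * k for t in range(n)]

def simple_random_sample(pop_n, n, seed=7):
    """Draw n unique IDs from 1..N with equal probability."""
    return sorted(random.Random(seed).sample(range(1, pop_n + 1), n))

sys_ids = systematic_sample(1200, 100)    # interval k = 12
srs_ids = simple_random_sample(1200, 100)
```

Logging the seed and the script alongside the frame snapshot satisfies the documentation requirements below.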
Operational recommendations: stratify systematic or simple random samples by key variables (school, sex – tracking males separately if relevant, age bands) when heterogeneity is high. Monitor response rates daily; persistently low response degrades sample quality and will render variance estimates unreliable. If nonresponse clusters create bias, implement replacement rules pre-specified in the protocol rather than convenience swaps.
Documentation requirements: record the frame creation date, actual N, n targeted and achieved, the seed used, the method of random number generation, and a short narrative of selection decisions. For purposive samples, list the criteria that qualified each candidate (for example: arts instructor, witness to crises, friend of an affected family, or a case named jane in qualitative notes) and the reason for their rank. Include a short defects log documenting missing IDs, duplicates, or anomalies attributed to cultivation or fashion effects and administrative errors.
Quality checks and thresholds: acceptable deviation between sample and frame on primary demographics is ≤5 percentage points; design effect estimates and intraclass correlation should be calculated post hoc; flag samples where variance inflation renders estimates unstable. If issues persist, convene a data review meeting within 48 hours to decide whether to secure additional draws or switch methods.
Ethical and practical notes: purposive selection may sacrifice generalizability but yields depth; secure informed consent and document response patterns as evidence of selection influence. For transparency, cite any protocol alteration and the individual (for example, afifi) responsible for approving changes; keep all selection steps auditable.
Coding Scheme and Reliability Sampling for Thematic Quantification
Recommendation: double-code 20% of all units or a minimum of 200 units (whichever is larger); require Cohen’s kappa ≥ 0.75 and Krippendorff’s alpha ≥ 0.80 before reporting theme-level statistics.
Unit of analysis and codebook structure:
- Unit: paragraph or speaker turn; choose one and keep it consistent across the dataset.
- Codebook format: code name, operational definition, examples, counter-examples, decision rule for multi-label cases, and an explicit “unknown” category for unclassifiable items.
- Version control: store changes with timestamp and author; record why a code was changed and how prior labels were recoded.
Coding categories (minimal working set; expand with pilot data):
- Economic-inducement – behaviors described as inducement or advertisement for material gain; examples and threshold counts required for assignment.
- Coercion – explicit pressure or threat; code only when coercion is the primary motive, not when merely implied.
- Habitual-patterns – repeated or habitual actions described as organized routines; include age-long routines and habitual language.
- Becoming/identity – passages about becoming or changed status (e.g., new name, changed role).
- Leisure/enjoyment – statements of enjoyment or stimulated pleasure, distinct from instrumental motives.
- Socioeconomic-status – flags for unemployed, extreme poverty, or job-related trouble; record as attributes, not themes.
- Ambiguity – "unknown" or otherwise unclassifiable passages that cannot be reliably assigned; label for later qualitative follow-up.
Training protocol and coder qualifications:
- Training length: 4 hours initial workshop + 50 practice excerpts per coder with feedback.
- Calibration: consensus meeting after first 50 double-coded items; record decisions and update codebook.
- Refresher: 1-hour recalibration after each 500 units or when kappa drops below threshold.
- Adjudication: third coder resolves ties; adjudication outcomes must be logged with short rationale informing future rules.
Reliability sampling strategy and sample-size calculations:
- Primary rule: double-code 20% of the corpus or at least Nmin = 200 units. Example: dataset of 2,000 units → double-code 400 units.
- For small corpora (<500 units): double-code at least 100 units or 25% of corpus, whichever is larger.
- To estimate proportion agreement with a ±5% margin at 95% confidence, use n ≈ (1.96² · p(1−p))/d²; with p = 0.80 → n ≈ 246. Use this when a precise agreement CI is required.
- Rare-code strategy: identify codes with expected prevalence <5%; oversample those strata by factor 2–3 to secure ≥50 double-coded exemplars per rare code.
- Stratified selection: stratify by key attributes (age-long themes, socioeconomic-status flags, genre) so the reliability sample reflects thematic heterogeneity rather than being clustered.
Agreement metrics and thresholds:
- Cohen’s kappa for pairwise reliability: report kappa and percent agreement; accept continuation at kappa ≥ 0.75 and percent agreement ≥ 80%.
- Krippendorff’s alpha for multiple coders or non-binary data: require alpha ≥ 0.80 for final analyses.
- Report the prevalence index and bias index alongside kappa to clarify interpretation when codes are not equally frequent.
- If kappa between 0.60 and 0.74, run targeted retraining and re-code a fresh random subset of 100 units before proceeding.
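Pairwise Cohen's kappa, the continuation criterion above, is straightforward to compute from two coders' label vectors. A minimal sketch that also returns percent agreement, since the thresholds use both:

```python
def kappa_and_agreement(coder_a, coder_b):
    """Cohen's kappa and raw percent agreement for two label sequences."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of units where the coders match.
    p_obs = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independent marginal label rates.
    cats = set(coder_a) | set(coder_b)
    p_exp = sum((coder_a.count(c) / n) * (coder_b.count(c) / n) for c in cats)
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return kappa, p_obs

k, agree = kappa_and_agreement(list("AABABBAB"), list("AABBBBAA"))
```

For multi-coder or non-binary data, Krippendorff's alpha (required at ≥ 0.80 above) needs a dedicated implementation; established library routines are preferable there.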
Disagreement resolution and drift control:
- Log each disagreement with the code pair, exemplar text, and adjudicator decision; use these logs to expand the codebook and resolve ambiguous edge cases.
- Conduct monthly drift checks: random 50-unit sample double-coded; if agreement falls below threshold, schedule retraining within one week.
- When multiple coders consistently refuse a code assignment, re-evaluate definition and consider merging or splitting categories rather than forcing artificial distinctions.
Reporting requirements and quality indicators:
- Publish: number of units double-coded, percent double-coded, kappa, alpha, percent agreement, CI for agreement, and details on oversampling of rare codes.
- Include: how many code definitions have been changed to date, what motivated each change, and how prior labels were reclassified.
- Provide sample excerpts for each code name to allow readers to judge fairness and replicability of assignments.
Practical examples and quick checks:
- Example 1: dataset 5,000 units → double-code 1,000 units; if extreme imbalance in one theme (2%), ensure at least 50 double-coded exemplars for that theme via targeted sampling.
- Example 2: two coders, initial kappa 0.68 → conduct 2-hour recalibration, re-code 150 new units; if kappa then ≥0.75, proceed; if not, add a third coder for adjudication.
- Quick diagnostic: if disagreement clusters on value-laden codes (coercion vs inducement), add explicit decision rules, additional examples, and a forced-choice checkbox for primary vs secondary motive.
Final operational notes:
- Record demographics and contextual attributes that might explain coder variance in interpretation.
- Avoid collapsing valid distinctions merely to raise agreement; document any compromises and why they are believed necessary.
- Maintain coder morale: recognize that some passages are genuinely ambiguous, but coding should not let coders refuse to label clear content; where coders are not confident, use the "unknown" tag for later review.
Operationalize “marriage as trade” metaphors into discrete, testable codes

Implement a 12-code scheme and annotate texts at sentence level: Price, Barter, Weapon, Tribute, Reward, Prevention, Identity, Jealousy, Evasiveness, Taking, Beaten, Social-Status. Each code has a binary presence flag and a strength score (0–3) based on frequency and emphasis; calls to action or direct appraisal increase strength by +1.
Define each code with lexical anchors and concrete threshold rules:
- Price: tokens such as price, cost, value, fee; present if ≥1 anchor in a 250-word window; dominant if strength ≥ 2.
- Barter: barter, exchange, give-and-take, taking; present if explicit reciprocity or quid-pro-quo framing appears.
- Weapon: weapon, strike, attack; mark when language implies coercion; require at least one violent metaphor plus contextual threat.
- Tribute: tribute, payment, tribute-bearing; flag when obligation or tribute is described.
- Reward: rewarded, reward, prize; flag when a benefit is promised contingent on action.
- Prevention: prevention, block, stop; mark preventative framing that limits choice.
- Evasiveness: evasiveness, evasive, avoidance; mark if the speaker avoids direct attribution of motives.
- Jealousy: jealous, envy; mark emotional rivalry.
- Identity: identity, status, attainments; mark references tying personhood to exchange outcomes.
Operational coding rules: annotate the target sentence and ±1 sentence of context; if conflicting codes appear, annotate both and record the co-occurrence. Create a codebook with three example passages per code and a negative control passage for each to reduce false positives. Include metadata fields: region (e.g., Africa), speaker gender (female/male/unknown), class cue (working-man, merchant, elite), source type (press, private letter, legal). For digital sources, include the raw URL token when cited (use httpswwwmohgovet as an example token for health-related documents) and record whether the source was included in the original corpus.
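A sketch of the anchor-and-threshold rule, assuming simple token matching within a 250-word window; only two of the twelve anchor sets are shown, and both the lists and the function name are illustrative stand-ins for the full codebook:

```python
import re

# Illustrative anchor sets (the full codebook supplies all twelve).
ANCHORS = {
    "Price":  {"price", "cost", "value", "fee"},
    "Barter": {"barter", "exchange", "give-and-take", "taking"},
}

def code_passage(text, window_words=250):
    """Per code: binary presence (>= 1 anchor hit in the window) and a
    strength score 0-3 capped at the anchor-hit count."""
    tokens = re.findall(r"[a-z\-]+", text.lower())[:window_words]
    return {
        code: {"present": sum(t in anchors for t in tokens) >= 1,
               "strength": min(sum(t in anchors for t in tokens), 3)}
        for code, anchors in ANCHORS.items()
    }

out = code_passage("The price of this exchange was a heavy cost and a fee besides.")
```

Real annotation would lemmatize tokens and apply the contextual rules (reciprocity framing, coercion cues) that pure lexical matching cannot capture; this sketch only covers the anchor-count threshold.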
Reliability and adjudication: double-code 20% of corpus; target Cohen’s kappa ≥ 0.70 for each code. When kappa < 0.70, run adjudication steps: 1) compare disagreements, 2) refine anchor list, 3) re-code the sample. Record inter-rater confusion matrix and update anchors until improvement. Use plentiful test samples across genres to avoid narrow sampling bias and prevent overfitting.
Quantitative metrics and analysis plan: compute prevalence per 10k words and co-occurrence matrices; report odds ratios linking specific codes to outcomes (e.g., frequency of barter metaphors predicts references to attainments among female speakers). Model count data with negative binomial regression, control for region and class; then test hypotheses about harm by regressing mentions of harm, beaten or weapon metaphors on social variables. Report effect sizes with 95% CI and p-values adjusted for multiple comparisons.
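Prevalence per 10k words, the first metric in the analysis plan, is a one-line normalization; the counts and corpus size below are made-up illustrations:

```python
def prevalence_per_10k(code_counts, total_words):
    """Mentions per 10,000 words for each code."""
    return {code: count * 10_000 / total_words
            for code, count in code_counts.items()}

rates = prevalence_per_10k({"Price": 42, "Barter": 17}, total_words=120_000)
```

Co-occurrence matrices and the negative binomial regressions mentioned above would sit downstream of these normalized counts, typically via a statistics package rather than hand-rolled code.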
Validity checks: triangulate with social-network indicators (friends, kin references) and behavioral records where available; use sentiment and syntactic parsers to validate coders' labels. Monitor language of impossibility or narrowing of choice as a validity signal for exchange-framing. Track eagerness and calls to action as proximate markers of agency and rewarded expectations. Flag jealousy- or identity-focused passages for qualitative follow-up.
Ethics and reporting: document prevention of harm in annotation protocol; anonymize personal data and record when physical coercion (beaten) or tribute demands appear. Publish the final codebook, example-coded corpus segments, and annotation steps so others can reproduce prevalence estimates and test extensions to new datasets of marriages and social exchange discourse.
Marriage as a Trade — Cicely Hamilton (1909) | Moffat, Yard & Company Edition