Your Monday list has three dead tasks

Stop rolling the same task into next week

Monday morning, 9 AM. The week is planned. Fifteen items on the list. You feel ready.

You also already know three of those are not happening. You just have not admitted it yet.

One guy on Reddit figured that out, ran the same experiment two weeks in a row, and turned it into a standing ritual he now does every Monday. Here is what he did and how you can steal it.

In partnership with

How Jennifer Aniston’s LolaVie brand grew sales 40% with CTV ads

For its first CTV campaign, Jennifer Aniston’s DTC haircare brand LolaVie had a few non-negotiables. The campaign had to be simple. It had to demonstrate measurable impact. And it had to be full-funnel.

LolaVie used Roku Ads Manager to test and optimize creatives — reaching millions of potential customers at all stages of their purchase journeys. Roku Ads Manager helped the brand convey LolaVie's playful voice while driving omnichannel sales across both ecommerce and retail touchpoints.

The campaign included an Action Ad overlay that let viewers shop directly from their TVs by clicking OK on their Roku remote. This guided them to the website to buy LolaVie products.

Discover how Roku Ads Manager helped LolaVie drive big sales and customer growth with self-serve TV ads.

The DTC beauty category is crowded. To break through, Jennifer Aniston's brand LolaVie worked with Roku Ads Manager to easily set up, test, and optimize CTV ad creatives. The campaign drove a big lift in sales and customer growth.

Why this actually matters

Most planning fails the same way. We add things to the list. We almost never subtract.

We schedule deep work during the hour before three back-to-back meetings. We add tasks that depend on someone else responding, then forget to account for that. We write "finalize X" without ever deciding what done actually looks like. We move the same task from last week's list to this week's list for the third time in a row and call it planning.

The real problem is not capacity. It is honesty. Most of us know, somewhere in the back of our heads, which items on the list are wishful thinking dressed up as commitments. We just do not want to say it, because saying it out loud feels like giving up.

These tasks do not fail because you are lazy. They fail because they were designed to fail from the start. Bad timing, missing inputs, no clear finish line, zero buffer for the inevitable Tuesday fire drill.

ChatGPT can see all of that. And it will say it out loud when you will not.

Why the model can do what you cannot

You have a stake in the list being right. The model does not. It has never met you, never seen your calendar, and spent zero seconds pretending to be encouraging. That is the feature.

When a human looks at your plan, they either flatter it (your boss, your team) or pile on their own panic (your spouse, your friend who also has too much on). The model does neither. It reads the list as a set of commitments with probabilities attached and calls the ones that do not survive contact with reality.

The Reddit user who started this got three out of three correct the first week. Three out of four the second, and the one miss came down to a cancelled meeting the model had no way of knowing about. That is a pretty uncomfortable track record for a free tool run in three minutes.

The part that actually changes your week

The predictions are not the point. The reasons are.

"This has no clear definition of done" is a fixable problem if you catch it Monday instead of Thursday. "This task depends on a handoff that has not happened yet" is information you can act on right now. The prediction tells you what is at risk. The reason tells you what to do about it.

Most people read the output and immediately argue with it. The ones who get value take the reasons and actually rewrite the week.

How to do it

This takes about three minutes on Monday morning.

Step 1. Write out your full week plan. Every task, every commitment, every "I will try to get to this" item. Do not filter. The unfiltered version is the whole point.

Step 2. Paste the whole thing into ChatGPT. Claude or Gemini work too. The model matters less than the question.

Step 3. Ask one question: "Which of these am I definitely not finishing this week, and why?" A worked example of the full paste follows step 5.

Step 4. Read the response without immediately arguing with it. Your first instinct will be to defend the list. Resist that. The discomfort is the point. Sit with it for thirty seconds before you decide the AI is wrong.

Step 5. Come back Friday and check. Not a vague memory of what you got done, but a side-by-side comparison against what the model flagged.
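
To make steps 1 through 3 concrete, here is the shape of the whole paste. The tasks and constraints below are invented for illustration; use your real ones:

"Here is my plan for the week: 1) finalize the Q3 deck, 2) send the client report, 3) review the hiring docs, 4) clear the support backlog, 5) draft next month's roadmap. I have six hours of meetings Monday and Tuesday, and the client report depends on numbers from finance that have not arrived. Which of these am I definitely not finishing this week, and why?"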

Meetings that actually lead somewhere

Granola is the AI notepad for people with back-to-back meetings. Take notes your way and Granola turns them into clear summaries, action items, and follow-ups. No bots. No disruptions. Just results.

*Ad

Pro tips

Give it context upfront. Mention your typical meeting load, whether tasks depend on other people, and any known constraints for the week. Something like "I have six hours of meetings Monday and Tuesday, and two of these tasks need responses from people outside my team" gives the model something real to work with.

Do not skip the why. The reasons are more useful than the predictions: a fuzzy definition of done can be sharpened on Monday, and a pending handoff can be chased today instead of Thursday. Use the reasons to rewrite tasks, not just delete them.

Try the harder version. Paste only the tasks you are most confident about and ask: "Which of these has a hidden dependency I have not thought through?" That one is genuinely uncomfortable. It surfaces assumptions you buried so deep you forgot they were assumptions.

Use it to trim, not just predict. After you see the flagged items, ask ChatGPT to either cut them or rewrite them so they are actually completable this week. "Send report to client" becomes "send client the three sections that are done and flag what is still missing." Same task, honest scope. A shorter honest list beats a long optimistic one every time.

Keep a running log. After a few weeks, patterns show up. Maybe ChatGPT keeps flagging your Thursday afternoon blocks. Maybe it keeps catching tasks that start with "coordinate with." That is not random. That is your actual workflow telling you something.
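
If you want the log to keep score for you, a few lines of Python will do it. This is a minimal sketch, not something from the original post; the file name, the columns, and the manual Friday fill-in are all assumptions:

import csv
from datetime import date
from pathlib import Path

# Assumed file name; any CSV path works.
LOG = Path("weekly_predictions.csv")

def log_prediction(task, flagged, reason=""):
    # Monday: record each task and whether the model flagged it as at risk.
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["week", "task", "flagged", "reason", "finished"])
        writer.writerow([date.today().isoformat(), task, flagged, reason, ""])

def score_week():
    # Friday: after filling the "finished" column with yes/no by hand,
    # print how often the model's flags matched what actually happened.
    with LOG.open(newline="") as f:
        rows = [r for r in csv.DictReader(f) if r["finished"]]
    hits = sum((r["flagged"] == "True") == (r["finished"] == "no") for r in rows)
    print(f"{hits}/{len(rows)} flags matched reality")

if __name__ == "__main__":
    # Hypothetical Monday entry, then the Friday tally.
    log_prediction("Send client report", flagged=True, reason="waiting on finance numbers")
    score_week()

A spreadsheet does the same job. The point is that Friday's check gets written down instead of remembered generously.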

The self-fulfilling prophecy concern

Someone in the Reddit comments flagged the obvious objection. If the model says a task will fail, you unconsciously let it fail. Fair concern.

The original poster had the best answer. The things ChatGPT flagged were things he was already quietly afraid of and had not admitted. The model did not create the doubt. It just named it.

There is a difference between a tool that talks you out of something and a tool that confirms what your gut already knew at 7 AM before the coffee kicked in. This is the second kind.

Three situations where this hits hardest

You are ending every week frustrated by the same unfinished tasks rolling into the next one, and you cannot figure out if the problem is planning or effort.

You run a team and half of the weekly priorities reliably slip, but you do not know which ones until Thursday when it is too late to reshuffle.

You are a solo operator and your "plan the week" habit has quietly become "write down everything and hope." You want a reality check that is not going to flatter you.

Conclusion

Here is what we covered today:

  • Most weekly plans fail because we add tasks and almost never subtract. The problem is honesty, not capacity.

  • ChatGPT has no stake in flattering your list. Paste it in, ask which items are not finishing, and the model will tell you, plus the reasons why.

  • The reasons are the actual value. They turn "this will fail" into "fix the dependency before Wednesday" or "rewrite this to something completable."

Your action step this Monday: Open a new chat. Paste in your week. Ask the one question. Read the answer without arguing. Check on Friday whether the model was right.

You do not have to believe it. You do not have to change anything based on it. Just run the experiment once and check on Friday. If it was right about something you already knew deep down but had not said out loud, that is the whole point.

Winning, on-brand ads—without endless prompting

Most AI ad tools generate volume, not quality — and refining output means endless prompt rewrites. With Hightouch Ad Studio, AI gets you 90% of the way there. For the final 10%, use a built-in editor to quickly refine copy and design. Move faster without losing control.

*Ad