Thread management
Erdöl Biramen
Merging whole threads
Merging selected parts of threads
Example of how it could be implemented:
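A rough sketch of how merging whole threads, or only selected parts of them, could work as a data operation; the Thread/Message shapes and the mergeThreads function are assumptions for illustration, not how Straico actually stores threads:

    interface Message { id: string; role: "user" | "assistant"; content: string; }
    interface Thread  { id: string; title: string; messages: Message[]; }

    // Merge whole threads, or only the selected messages, into a new thread.
    function mergeThreads(threads: Thread[], selectedIds?: Set<string>): Thread {
      const messages = threads.flatMap(t =>
        t.messages.filter(m => !selectedIds || selectedIds.has(m.id))
      );
      return { id: `merged-${Date.now()}`, title: "Merged thread", messages };
    }

Omitting selectedIds merges the threads in full; passing a set of message ids merges only the selected parts.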
FYI
I was the one who gave Straico the idea for Smart Merge. Here are my related Discord messages:
================================================================
What I almost always do when I compare the answers of different models is to feed the differences I find (conflicting statements, or points mentioned by one model but not by another) back to each model and ask it to reconsider its answer and then tell me whether it stands by its answer (and if so, with what reasoning) or whether it changes it to be in line with the answers of the other models.
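A rough sketch of how this feed-back loop could be automated; callModel is a hypothetical helper standing in for whatever chat-completion call the platform uses:

    // Hypothetical helper: sends a prompt to one model and returns its answer text.
    type CallModel = (model: string, prompt: string) => Promise<string>;

    // Ask each model to reconsider its own answer in light of the differences
    // (conflicting statements, missing points) found across all answers.
    async function reconsider(
      callModel: CallModel,
      answers: Record<string, string>,  // model name -> original answer
      differences: string               // summary of conflicts and omissions
    ): Promise<Record<string, string>> {
      const revised: Record<string, string> = {};
      for (const [model, answer] of Object.entries(answers)) {
        const prompt =
          `Your previous answer was:\n${answer}\n\n` +
          `Other models disagreed or added points:\n${differences}\n\n` +
          `Do you stand by your answer? If so, explain your reasoning; ` +
          `if not, provide a corrected answer.`;
        revised[model] = await callModel(model, prompt);
      }
      return revised;
    }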
--------------------------------------------------------------------------------------------
IMHO there should be an option to activate an "auto-compare" feature for the side-by-side panes. When enabled, each pane would display a thin vertical bar along its side, color-coded per line (or at least per paragraph) to signal consensus or disagreement with the other panes: green meaning consensus, red meaning disagreement, and the intensity of the green/red indicating the degree of consensus/disagreement. Optionally, the color could be overlaid with a pattern, where a solid fill would signal rather higher reliability of the assumed consensus/disagreement and a checkerboard pattern (also called a tiled or grid pattern) rather lower reliability. Lines or paragraphs of a merely conversational nature that contain no relevant facts (e.g. the model rephrasing the prompt, expressing concerns about its morality, or apologizing for not having up-to-date information) should be excluded from the comparison and thus not be color-coded.
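A rough sketch of how the per-paragraph color/pattern encoding could be computed; the value ranges and the rgb/pattern outputs are assumptions for illustration:

    // Assessment of one paragraph relative to the other panes.
    interface ParagraphAssessment {
      agreement: number | null;  // null = conversational/excluded; -1 (strong disagreement) ... +1 (strong consensus)
      reliability: number;       // 1 (low) ... 5 (high) reliability of the assessment
    }

    // Map an assessment to the fill color and pattern of one bar segment.
    function barStyle(a: ParagraphAssessment): { color: string; pattern: "solid" | "checkerboard" } | null {
      if (a.agreement === null) return null;                     // excluded: leave the bar uncolored
      const intensity = Math.round(255 * Math.abs(a.agreement)); // stronger consensus/disagreement = more saturated
      const color = a.agreement >= 0
        ? `rgb(0, ${intensity}, 0)`   // green: consensus
        : `rgb(${intensity}, 0, 0)`;  // red: disagreement
      const pattern = a.reliability >= 3 ? "solid" : "checkerboard";
      return { color, pattern };
    }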
--------------------------------------------------------------------------------------------
IMHO it would require just one additional call to whatever model excels at tasks like:
.) comparing different bodies of text to figure out the concurring/conflicting facts/statements
.) extracting entities (e.g. products, services, risks, options etc. depending on what the prompt was about)
.) creating summaries
The comparer would take note of the line/paragraph numbers where those facts/statements/entities are found in each text, assign an assessment value, e.g. NULL (statement/entity missing), 0 (conflicting statement), 1 (concurring statement or entity mentioned), and assign a second value, e.g. 1-5, representing the reliability of the assessment. The rest is just painting the colors/patterns onto the vertical bars.
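A rough sketch of the record the comparer could emit per statement/entity and per model answer; the field names are assumptions chosen to mirror the values described above:

    // One assessment of a single statement/fact/entity within one model's answer.
    interface Assessment {
      statementId: string;            // de-duplicated statement/fact/entity being checked
      model: string;                  // which model's answer was inspected
      paragraph: number | null;       // paragraph/line number where it was found, null if absent
      verdict: 0 | 1 | null;          // null = missing, 0 = conflicting, 1 = concurring/mentioned
      reliability: 1 | 2 | 3 | 4 | 5; // how reliable the comparer considers this verdict
    }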
Optionally, a table could be generated, with the de-duplicated list of all statements/facts/entities in rows, the model names in columns, and a check mark in cells where the model made a concurring statement or mentioned an entity, a cross where it made a conflicting statement, and nothing where it made no related statement or did not mention that entity.
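The same assessments could then be pivoted into that table; a rough sketch, assuming the Assessment shape above:

    // Pivot assessments into rows = statements and columns = models, with
    // "✓" (concurring / entity mentioned), "✗" (conflicting) or "" (not addressed) per cell.
    function buildComparisonTable(assessments: Assessment[], models: string[]): string[][] {
      const statements = Array.from(new Set(assessments.map(a => a.statementId)));
      return statements.map(statementId => [
        statementId,
        ...models.map(model => {
          const a = assessments.find(x => x.statementId === statementId && x.model === model);
          if (!a || a.verdict === null) return "";  // model made no related statement
          return a.verdict === 1 ? "✓" : "✗";
        }),
      ]);
    }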
This feature could be called COMPARER or ARBITER, depending on whether it conducts further research to resolve the conflicting information.
Joel Midden
Merged in a post:
Merging threads
Erdöl Biramen
Like e.g. here
FYI
The idea for this feature originates from me.
Here are my related messages on Straico's Discord server:
================================================================
Yeah, I think that to keep the coin usage under control, it should be possible for the user to decide, for each new prompt, which previous prompts, file uploads and answers should be considered, maybe by clicking a checkbox next to each of them. It should also be possible to save these selections as named sets, because a few follow-up prompts later one might need the same selection again. And by default, the selection used for the last prompt should "auto-grow", meaning that follow-up prompts (along with their attachments and answers) automatically get added to the selection.
TBH, while this suggestion of mine would help to reduce coin usage, it would also make using Straico cumbersome and thus less attractive compared to using Perplexity and/or You.com with their unlimited plans, where one doesn't need to worry about the costs.
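A rough sketch of what such a saved context selection could look like, including the auto-grow behavior; the shape is an assumption, not how Straico actually stores conversations:

    // A named set of thread items (prompts, file uploads, answers) to send as context.
    interface ContextSelection {
      name: string;          // e.g. "pricing research"
      itemIds: Set<string>;  // ids of the selected prompts/files/answers
      autoGrow: boolean;     // if true, new follow-ups are added automatically
    }

    // After a follow-up exchange, add its items to the selection when auto-grow is on.
    function recordFollowUp(selection: ContextSelection, newItemIds: string[]): void {
      if (!selection.autoGrow) return;
      for (const id of newItemIds) selection.itemIds.add(id);
    }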