In response to MrStonedOne
MrStonedOne wrote:
lummox: how about splitting by each string if the delimiter is a list, and by the whole string otherwise?

Then devs have the ability to do both.

Devs already have the ability to do both via regular expressions.
lummox, you've been in dev help before.


honestly, can you tell me that even 50% of devs would be able to competently handle regular expressions?
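For reference, a rough sketch of the regex route in DM, assuming a build where the /regex datum is available; regex_split() is a made-up helper name, and the Find()/match usage reflects how that datum is generally documented, not code from this thread:

```dm
// Hedged sketch of regex-based splitting. Assumes Find() returns the
// 1-based match position (0 if none) and R.match holds the matched text.
/proc/regex_split(text, pattern)
    var/regex/R = regex(pattern)
    var/list/out = list()
    var/start = 1
    var/pos = R.Find(text, start)
    while(pos)
        out += copytext(text, start, pos)
        // guard against zero-width matches to avoid an infinite loop
        start = pos + max(length(R.match), 1)
        pos = R.Find(text, start)
    out += copytext(text, start, 0)
    return out

// regex_split("a, b,c", ",[ ]*") -> list("a", "b", "c")
```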
In response to Lummox JR
By that logic why did you even bother with splittext() in the first place?

I'm pretty sure that most people would expect splittext to split on the whole string.
In response to PJB3005
Like I said, it could go two different ways. Tokenizers classically use the method I went with. I do see a rationale for the other way.
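As a point of reference, a minimal hand-rolled sketch of the "set of characters" interpretation described above; tokenize_by_charset() is a hypothetical helper, not BYOND's internals:

```dm
// Any single character that appears in delims ends the current token.
/proc/tokenize_by_charset(text, delims)
    var/list/out = list()
    var/start = 1
    for(var/i = 1, i <= length(text), i++)
        // findtextEx() is the case-sensitive search; a hit means the
        // character at position i is one of the delimiter characters
        if(findtextEx(delims, copytext(text, i, i + 1)))
            out += copytext(text, start, i)
            start = i + 1
    out += copytext(text, start, 0)
    return out

// tokenize_by_charset("a-b_c", "-_") -> list("a", "b", "c")
```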
i just think splittext should do both; regex is more complicated to use, has more overhead, and shouldn't be an excuse to limit splittext and jointext.

If i had to pick one, i'd want the whole string, so it could flat out replace our current text2list and list2text, as that's how those work.

at /tg/station13 we haven't had a need for the other way that i know of.
I think trying to give splittext() two very different functionalities is too inconsistent and isn't modular. Instead of trying to compromise, it would make a lot more sense to just have two separate procs, since they will tend to have their own, somewhat different use cases.

What I would suggest is to rename the current proc tokentext(), and make it so that passing an empty string ("") as the delimiter would result in the text being split into a list with every single character as an item. This would better represent the proc's intended use in tokenization.

This frees up the name splittext() for the more general use that we have outlined here. In this way, it's a win for everyone.
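A small sketch of the empty-delimiter behavior proposed here; split_into_chars() is a hypothetical helper, not an existing proc:

```dm
// "" as the delimiter would yield a list of the string's individual characters.
/proc/split_into_chars(text)
    var/list/out = list()
    for(var/i = 1, i <= length(text), i++)
        out += copytext(text, i, i + 1)
    return out

// split_into_chars("abc") -> list("a", "b", "c")
```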
it would make a lot more sense to just have two separate procs

i mean it doesn't matter at this point, 510 is out, so no new procs or language features are allowed until 511.

So the only way to get these functionalities is either regex, which has speed implications, or an arg on splittext.
Adding list support and changing the current behavior are both feasible because they don't impact the compiled proc code.
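A rough sketch of what list support could amount to, written as a hypothetical user-level helper (splittext_any() is not a built-in) that splits on any of several exact delimiter strings:

```dm
/proc/splittext_any(text, list/delims)
    var/list/out = list()
    var/start = 1
    var/i = 1
    while(i <= length(text))
        var/hit = 0
        for(var/d in delims)
            // does delimiter d occur exactly at position i?
            if(findtextEx(text, d, i, i + length(d)) == i)
                hit = length(d)
                break
        if(hit)
            out += copytext(text, start, i)
            i += hit
            start = i
        else
            i++
    out += copytext(text, start, 0)
    return out

// splittext_any("a::b--c", list("::", "--")) -> list("a", "b", "c")
```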
+1 for list support
Another +1 for list support
So a list for multiple matches, a string for one match, and regex for pattern-based matching? That'd be amazing.
List support I'm not fussed about, but splitting the delimiter into individual characters seems crazy to me, so I'd personally appreciate that change.
Lummox JR resolved issue with message:
By popular demand, splittext() has been changed so that the delimiter is an exact match, rather than a set of characters to match. The delimiter is always case-sensitive.
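A short usage example under the new behavior (split_demo() is just a demo proc name):

```dm
/proc/split_demo()
    // "::" is now matched as one two-character delimiter, not as a set
    // of characters to match individually
    var/list/parts = splittext("red::green::blue", "::")
    // parts == list("red", "green", "blue")
    world.log << jointext(parts, ", ")
```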