Cross-connecting (by CKM).
It's a neural net training problem, really. When you design software neural nets to solve a problem (neural nets loosely model the brain, in terms of neurons, connections between them, and the strengthening of the connections that get used, which is how associations are built), you have to train the net on the problem domain. A typical application of a neural net might be facial recognition, and you would train the software by showing it a number of pictures of the subject it was being taught to recognize. It's a little tricky, because it's important to get the training set right. If you show too few pictures of the subject, the net won't have enough information to generalize, and won't recognize other, similar pictures. But, perhaps surprisingly, if you train it too long, or on the same narrow set of pictures over and over, the net becomes "over-trained": it fits those exact pictures so closely that it effectively acts as though its understanding of the subject is perfect and complete, with nothing left to add, so it won't recognize new pictures of the subject very well either, because they differ from the pictures it memorized. Its understanding is over-specific, and normal variation throws it off.
Anyway, broadly speaking, I think I've been suffering from too few items in the training set, largely because I haven't spent decades listening to this music, and although I have worked on lots of songs, I've worked on them sequentially, so there has been limited carry-over of pattern recognition from one song to the next. Each one has been a whole new adventure. Which has been fun, but it's useful in a different way to have the patterns from a dozen songs in my mind at once, because there are cross-connections every which way, and that makes them easy to see.
Sometimes, in another sense, I get too many items in the training set, like when I want C to specify every microscopic dynamic in a line when it just doesn't matter all that much. Often there is some essential characteristic that needs to be kept while the tiny details can be done many different ways, but I fail to distinguish those two categories. From a neural net perspective, you want the net to identify the essential characteristic and be flexible about the rest; overtraining is when the net perceives the non-essential characteristics as part of the essential identity. But how do you tell the difference? That's the tricky part of choosing the training set's size and content, and even with software, it de facto gets worked out by trial and error until you come up with the right approach for the problem domain.
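The trial-and-error loop described above has a standard concrete form: hold some data out, try a range of model settings, and keep whichever one predicts the held-out data best. This is a minimal sketch with invented data, again using a k-nearest-neighbor averager as a hypothetical stand-in for the real model.

```python
import random

random.seed(1)

# Invented data: a linear trend plus noise.
points = [(i / 25, 3 * i / 25 + random.uniform(-0.2, 0.2))
          for i in range(50)]
random.shuffle(points)
train, held_out = points[:35], points[35:]

def predict(x, k):
    # k-nearest-neighbor average over the training points.
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def held_out_error(k):
    # How well does this setting predict data it never trained on?
    return sum(abs(predict(x, k) - y) for x, y in held_out) / len(held_out)

# Trial and error, automated: try every candidate setting and keep
# the one that generalizes best to the held-out data.
best_k = min(range(1, 36), key=held_out_error)
print("chosen k:", best_k)
```

The held-out set plays the role of "new pictures of the subject": a setting that only memorizes, or only averages, gets caught because it predicts the unseen points poorly.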
I wasn't thinking about any of this this morning, though. I was thinking lift & drop, putting weight into the keys, using the weight of my arm to roll into notes and phrases, staying relaxed, trying to always make phrases go somewhere dynamically, never just sit there, and trying to be totally focused on whatever we were working on. That kind of focus just makes me so happy, both at the time and thinking about it later.
Austinato: We had a further dialogue to kind of tie things together. Thank you, dear reader: