Assessing students' writing can be a challenging task. To make writing assessment more feasible, researchers have investigated the possibilities of automated essay scoring (AES). Most studies of AES have focused on L1 writing or on intermediate to advanced L2 writing. In this study we explored the possibilities of using AES with low-proficiency L2 English writers. We used a dataset comprising writing samples from 3,166 young L2 English learners who were at the very start of L2 English instruction. All writing samples received a score assigned by human raters. For automated scoring we experimented with two machine learning methods. The first was a feature-based approach, for which the dataset was linguistically preprocessed with natural language processing tools. The second employed deep learning by fine-tuning various large language models. Because we were particularly interested in the influence of spelling errors, we also created a corrected, spell-checked version of our dataset. Models trained on the uncorrected samples yielded the best results. The deep learning approach in particular achieved satisfactory performance, with a quadratic weighted kappa above .70. The model fine-tuned from an underlying Dutch large language model was superior, which might be linked to the low L2 English proficiency of the young L1 Dutch writers in our sample.
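The abstract reports agreement as quadratic weighted kappa (QWK) but does not show how it is computed. As a point of reference, the minimal sketch below computes QWK between human and model-assigned scores with scikit-learn's `cohen_kappa_score`; the 0–5 score scale and the example values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch: quadratic weighted kappa (QWK) between human and
# model-assigned essay scores. The score scale and values below are
# hypothetical illustrations, not data from the study.
from sklearn.metrics import cohen_kappa_score

# Hypothetical human ratings and model predictions on a 0-5 scale.
human_scores = [3, 2, 4, 1, 0, 5, 3, 2]
model_scores = [3, 2, 3, 1, 1, 5, 4, 2]

# weights="quadratic" penalizes large disagreements more heavily than
# near misses, which suits ordinal essay scores; QWK values above .70
# are commonly read as satisfactory agreement in AES research.
qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
print(f"QWK: {qwk:.2f}")
```

The quadratic weighting is the conventional choice for ordinal rating scales: a prediction that is two score points off is penalized four times as heavily as one that is a single point off.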