
Lewisburg boys blank Devon Prep, advance to Class AA state semifinals – sungazette.com

Nov 8, 2025
RALPH WILSON/Sun-Gazette Correspondent Lewisburg’s Isaac Ayres, shown in the District 4 final, had an assist in Saturday’s PIAA Class AA quarterfinal against Devon Prep as the Green Dragons beat the Tide, 2-0, to advance to the state semifinals.
DILLSBURG – Regardless of whether Lewisburg holds a lead in a game, the Green Dragons and their coaching staff approach every half the same way: treat it as a 0-0 game and go out and execute.
So, while Lewisburg went into the break up a goal, the Green Dragons battled as if it were a scoreless game for the final 40 minutes. Lewisburg kept pressing and scored a second-half goal in the final minute to defeat Devon Prep, 2-0, Saturday at Northern High School in Dillsburg and advance to the PIAA Class AA semifinals.
“(I told guys at halftime) it was 0-0 and we needed to work hard for 40 minutes, don’t have any regrets and the other team made some adjustments,” Lewisburg coach Ben Kettlewell said. “I thought they controlled the second half, we didn’t give up anything dangerous, Gabe made a couple saves, but we had to defend really hard, especially the last 15 minutes. They started to really press us and we weren’t getting what we wanted.”
Lewisburg’s second-half goal came in the closing seconds when Jack Johnson was able to put away a goal for the 2-0 victory. The Green Dragons went ahead 1-0 in the first half with 16:51 to play until halftime when Isaac Ayres found Dylan Vogel for a goal.
“It was huge. Dylan Vogel, he comes off the bench and put one in on the far post off Isaac Ayres,” Kettlewell said. “It lifted us. We were kind of in control a little bit at that moment, but it kind of proved like hey what we’re doing is the right thing and it really helped.”
And getting that spark off the bench was something Kettlewell was glad to see.
“It’s always good when your bench kind of gives you a lift knowing when you put guys in, your level of play can get better,” Kettlewell said.
Lewisburg out-shot Devon Prep, 11-8, as Gabe Pawling ended the game with five saves for the Green Dragons, who improve to 21-0-1. In the first half, Devon Prep was limited to just two shots on goal as the Green Dragons kept possession and didn’t allow the Tide to get looks at the goal to attempt to tie it.
“That was really huge. I think Gabe really only had to make one save on a really good shot that Gabe had to make a stop on, but we really limited their shots to make stuff Gabe could see and set his feet,” Kettlewell said. “That’s him just organizing our defense to limit what they wanted to do.”
The shutout is the 18th of the season for Pawling and 41st of his career.
Plain and simple, the Green Dragons are happy to be moving on and keeping their season going in November.
“We’re extremely proud that we were able to get a win,” Kettlewell said. “Any win is a good win.”
Following the shutout win against District 12 champion Devon Prep, Lewisburg will be set for a matchup against District 1 champion Faith Christian, who knocked off No. 1-ranked Northwestern Lehigh in the quarterfinals, 2-1, in overtime. Northwestern Lehigh, the District 11 champions, were the defending state champions as well.
“Any game in the state semifinals is going to be a hard one,” Kettlewell said. “We’re expecting hopefully a good match that we can come out and find our moments to make a couple plays and stay organized and be relentless.”
Lewisburg 2, Devon Prep 0
(PIAA CLASS AA QUARTERFINALS)
L–Dylan Vogel (Isaac Ayres), 16:51. L–Jack Johnson, 0:02.
Shots: L 11, DP 8. Saves: L 5 (Gabe Pawling), DP 5.
Records: Devon Prep (13-5-1), Lewisburg (21-0-1). Next game: Lewisburg vs. Faith Christian, Tuesday, TBD.

Copyright © 2025 Sun-Gazette, LLC | https://www.sungazette.com | 252 W. Fourth Street, Williamsport, PA 17703 | 570-326-1551



Powerball winning numbers for $467 million jackpot drawing on Saturday, Nov. 8 – USA Today

The Powerball jackpot rose to $467 million for the lottery drawing on Saturday, Nov. 8, after no one on Wednesday, Nov. 5, took home the top prize.
If a ticket matches all five numbers plus the Powerball in the 11 p.m. ET drawing, the winner can choose a one-time cash payment of $218.1 million before taxes.
Top Ten Powerball Jackpots
Check below to see the winning numbers for the Powerball drawing on Nov. 8.
The winning numbers for Saturday, Nov. 8, will be posted here once drawn.
Winning lottery numbers are sponsored by Jackpocket, the official digital lottery courier of the USA TODAY Network.
Any Powerball winners will be posted here once announced by lottery officials.
To find the full list of previous Powerball winners, click the link to the lottery’s website.
The next drawing will take place on Monday, Nov. 10, just after 11 p.m. ET.
To play the Powerball, you have to buy a ticket for $2. You can do this at a variety of locations, including your local convenience store, gas station, or even grocery store. In some states, Powerball tickets can be bought online.
Once you have your ticket, you need to pick six numbers. Five of them will be white balls with numbers from 1 to 69. The red Powerball ranges from 1 to 26. Players can also add a “Power Play” for $1, which increases the winnings for all non-jackpot prizes.
The “Power Play” multiplier can multiply winnings by: 2X, 3X, 4X, 5X, or 10X.
If you are feeling unlucky or want the computer to do the work for you, the “Quick Pick” option is available, where computer-generated numbers will be printed on a Powerball ticket. To win the jackpot, players must match all five white balls in any order and the red Powerball.
Powerball drawings are held on Monday, Wednesday and Saturday nights. If no one wins the jackpot, the cash prize will continue to tick up.
Tickets can be purchased in person at gas stations, convenience stores and grocery stores. Some airport terminals may also sell lottery tickets.
You can also order tickets online through Jackpocket, the official digital lottery courier of the USA TODAY Network, in these U.S. states and territories: Arizona, Arkansas, Colorado, Idaho, Maine, Massachusetts, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, New York, Oregon, Puerto Rico, Washington D.C. and West Virginia. The Jackpocket app lets you pick your lottery game and numbers, place your order, view your ticket, and collect your winnings — all on your phone or home computer.
Jackpocket is the official digital lottery courier of the USA TODAY Network. Gannett may earn revenue for audience referrals to Jackpocket services. Must be 18+, 21+ in AZ and 19+ in NE. Not affiliated with any State Lottery. Gambling Problem? Call 1-877-8-HOPE-NY or text HOPENY (467369) (NY); 1-800-327-5050 (MA); 1-877-MYLIMIT (OR); 1-800-981-0023 (PR); 1-800-GAMBLER (all others). Visit jackpocket.com/tos for full terms.
Fernando Cervantes Jr. is a trending news reporter for USA TODAY. Reach him at fernando.cervantes@gannett.com and follow him on X @fern_cerv_.



Winning Powerball numbers in Nov. 8 lottery drawing last night: Anyone win Powerball jackpot? – IndyStar

The Powerball jackpot continues to grow after no one matched all six Powerball numbers to win Wednesday’s drawing.
Grab your tickets and check your numbers to see if you’re the game’s newest millionaire.
Here are the numbers for the Saturday, Nov. 8, Powerball jackpot worth an estimated $467 million with a cash option of $218.1 million.
Saturday night’s drawing will take place at 10:59 p.m. ET. Winning numbers will be posted here. The winning numbers for Wednesday night’s drawing were 9, 17, 29, 61, 66, with a Powerball of 26. The Power Play was 5x.
Results are pending.
The Powerball jackpot for Saturday, Nov. 8, 2025, rises to $467 million with a cash option of $218.1 million, according to powerball.com.
Drawings are held three times per week at approximately 10:59 p.m. ET every Monday, Wednesday, and Saturday.
You only need to match one number in Powerball to win a prize. However, that number must be the red Powerball, which by itself is worth $4. Visit powerball.com for the entire prize chart.
Matching two numbers won’t win anything in Powerball unless one of the numbers is the Powerball. A ticket matching one of the five numbers and the Powerball is also worth $4. Visit powerball.com for the entire prize chart.
A single Powerball ticket costs $2. Pay an additional $1 to add the Power Play for a chance to multiply all Powerball winnings except for the jackpot. Players can also add the Double Play for an additional $1 to have a second chance at winning $10 million.
In Mega Millions, Friday night’s winning numbers were 16, 21, 23, 48, 70, and the Mega Ball was 5.
The Mega Millions jackpot for Tuesday’s drawing grows to an estimated $900 million with a cash option of $415.3 million after no Mega Millions tickets matched all six numbers to win the jackpot, according to megamillions.com.
Here is the list of 2025 Powerball jackpot wins, according to powerball.com:
Here are the all-time top 10 Powerball jackpots, according to powerball.com:
Here are the nation’s all-time top 10 Powerball and Mega Millions jackpots, according to powerball.com:
Chris Sims is a digital content producer at Midwest Connect Gannett. Follow him on Twitter: @ChrisFSims.



Package and deploy classical ML and LLMs easily with Amazon SageMaker, part 1: PySDK Improvements – Amazon Web Services (AWS)


Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and effortlessly build, train, and deploy machine learning (ML) models at any scale. SageMaker makes it straightforward to deploy models into production directly through API calls to the service. Models are packaged into containers for robust and scalable deployments. Although it provides various entry points like the SageMaker Python SDK, AWS SDKs, the SageMaker console, and Amazon SageMaker Studio notebooks to simplify the process of training and deploying ML models at scale, customers are still looking for better ways to deploy their models for playground testing and to optimize production deployments.
We are launching two new ways to simplify the process of packaging and deploying models using SageMaker.
In this post, we introduce the new SageMaker Python SDK ModelBuilder experience, which aims to minimize the learning curve for new SageMaker users like data scientists, while also helping experienced MLOps engineers maximize utilization of SageMaker hosting services. It reduces the complexity of initial setup and deployment while providing guidance on best practices for taking advantage of the full capabilities of SageMaker. We provide detailed information and GitHub examples for this new SageMaker capability.
The other new launch is to use the new interactive deployment experience in SageMaker Studio. We discuss this in Part 2.
Deploying models to a SageMaker endpoint entails a series of steps to get the model ready to be hosted on a SageMaker endpoint. This involves getting the model artifacts in the correct format and structure, creating inference code, and specifying essential details like the model image URL, Amazon Simple Storage Service (Amazon S3) location of model artifacts, serialization and deserialization steps, and necessary AWS Identity and Access Management (IAM) roles to facilitate appropriate access permissions. Following this, an endpoint configuration requires determining the inference type and configuring respective parameters such as instance types, counts, and traffic distribution among model variants.
To further help our customers when using SageMaker hosting, we introduced the new ModelBuilder class in the SageMaker Python SDK, which brings several key benefits when deploying models to SageMaker endpoints.
Overall, SageMaker ModelBuilder simplifies and streamlines the model packaging process for SageMaker inference by handling low-level details and providing tools for testing, validating, and optimizing endpoints. This improves developer productivity and reduces errors.
In the following sections, we deep dive into the details of this new feature. We also discuss how to deploy models to SageMaker hosting using ModelBuilder, which simplifies the process. Then we walk you through a few examples for different frameworks to deploy both traditional ML models and the foundation models that power generative AI use cases.
The new ModelBuilder is a Python class focused on taking ML models built using frameworks, like XGBoost or PyTorch, and converting them into models that are ready for deployment on SageMaker. ModelBuilder provides a build() function, which generates the artifacts according to the model server, and a deploy() function to deploy locally or to a SageMaker endpoint. The introduction of this feature simplifies the integration of models with the SageMaker environment, optimizing them for performance and scalability. The following diagram shows how ModelBuilder works at a high level.

The ModelBuilder class provides different options for customization. However, to deploy a framework model, the model builder just expects the model, input, output, and role:
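A minimal sketch of that call might look like the following (import paths follow recent versions of the SageMaker Python SDK; the role ARN and the model and sample objects are placeholders from your own workflow):

```python
from sagemaker.serve import ModelBuilder, SchemaBuilder

# model: a trained framework model object (e.g., XGBoost or PyTorch)
# sample_input / sample_output: example payloads used to infer serialization
model_builder = ModelBuilder(
    model=model,
    schema_builder=SchemaBuilder(sample_input, sample_output),
    role_arn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
)
```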
The SchemaBuilder class enables you to define the input and output for your endpoint and generates the corresponding marshaling functions for serializing and deserializing them. The following class file provides all the options for customization:
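Its constructor takes roughly the following shape (a sketch; see the SDK reference for the authoritative signature):

```python
class SchemaBuilder:
    def __init__(
        self,
        sample_input,                                        # example request payload
        sample_output,                                       # example response payload
        input_translator: "CustomPayloadTranslator" = None,  # optional custom request marshaling
        output_translator: "CustomPayloadTranslator" = None, # optional custom response marshaling
    ): ...
```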
However, in most cases, just sample input and output would work. For example:
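For a translation model, for instance, plain strings are enough for SchemaBuilder to infer JSON marshaling (the sample text is illustrative):

```python
from sagemaker.serve import SchemaBuilder

sample_input = "How is the demo going?"
sample_output = "Comment se passe la démo ?"
schema_builder = SchemaBuilder(sample_input, sample_output)
```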
By providing sample input and output, SchemaBuilder can automatically determine the necessary transformations, making the integration process more straightforward. For more advanced use cases, there’s flexibility to provide custom translation functions for both input and output, ensuring that more complex data structures can also be handled efficiently. We demonstrate this in the following sections by deploying different models with various frameworks using ModelBuilder.
In this example, we use ModelBuilder to deploy an XGBoost model locally. You can use Mode to switch between local testing and deploying to a SageMaker endpoint. We first train the XGBoost model (locally or in SageMaker) and store the model artifacts in the working directory:
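For example, training a small XGBoost classifier and saving its artifact to the working directory could look like this (the Iris dataset and the file name are illustrative):

```python
import os

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# train a small classifier on sample data
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# store the model artifact in the working directory for ModelBuilder to find
model_dir = "."
model.save_model(os.path.join(model_dir, "my_model.xgb"))
```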
Then we create a ModelBuilder object by passing the actual model object, the SchemaBuilder that uses the sample test input and output objects (the same input and output we used when training and testing the model) to infer the serialization needed. Note that we use Mode.LOCAL_CONTAINER to specify a local deployment. After that, we call the build function to automatically identify the supported framework container image as well as scan for dependencies. See the following code:
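A sketch of that setup (execution_role is a placeholder IAM role ARN, and X_test/y_pred are the sample input and output from training and testing):

```python
from sagemaker.serve import Mode, ModelBuilder, SchemaBuilder

model_builder_local = ModelBuilder(
    model=model,                                   # the trained XGBoost model object
    schema_builder=SchemaBuilder(X_test, y_pred),  # sample input/output to infer serialization
    role_arn=execution_role,                       # placeholder IAM role ARN
    mode=Mode.LOCAL_CONTAINER,                     # test locally before using an endpoint
)
# build() detects the framework container image and scans for dependencies
xgb_local_builder = model_builder_local.build()
```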
Finally, we can call the deploy function on the model object, which also provides live logging for easier debugging. You don’t need to specify the instance type or count because the model will be deployed locally; if you provide these parameters, they will be ignored. This function returns the predictor object that we can use to make predictions with the test data:
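For example (here xgb_local_builder is the model object returned by build(); serialization and deserialization are handled by the schema builder):

```python
# instance type and count are ignored for local deployments
predictor_local = xgb_local_builder.deploy()

# make predictions with the test data
predictor_local.predict(X_test)
```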
Optionally, you can also control the loading of the model and preprocessing and postprocessing using InferenceSpec. We provide more details later in this post. Using LOCAL_CONTAINER is a great way to test out your script locally before deploying to a SageMaker endpoint.
Refer to the model-builder-xgboost.ipynb example to test out deploying both locally and to a SageMaker endpoint using ModelBuilder.
In the following examples, we showcase how to use ModelBuilder to deploy traditional ML models.
Similar to the previous section, you can deploy an XGBoost model to a SageMaker endpoint by changing the mode parameter when creating the ModelBuilder object:
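A sketch of the endpoint variant (the role ARN and instance type are placeholders):

```python
from sagemaker.serve import Mode, ModelBuilder, SchemaBuilder

model_builder = ModelBuilder(
    model=model,
    schema_builder=SchemaBuilder(X_test, y_pred),
    role_arn=execution_role,
    mode=Mode.SAGEMAKER_ENDPOINT,  # deploy to a hosted endpoint instead of a local container
)
predictor = model_builder.build().deploy(
    initial_instance_count=1,
    instance_type="ml.c5.xlarge",
)
```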
Note that when deploying to SageMaker endpoints, you need to specify the instance type and instance count when calling the deploy function.
Refer to the model-builder-xgboost.ipynb example to deploy an XGBoost model.
You can use ModelBuilder to serve PyTorch models on Triton Inference Server. For that, you need to specify the model_server parameter as ModelServer.TRITON, pass a model, and have a SchemaBuilder object, which requires sample inputs and outputs from the model. ModelBuilder will take care of the rest for you.
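A sketch of the Triton path (the model, sample payloads, role, and instance type are placeholders):

```python
from sagemaker.serve import ModelBuilder, ModelServer, SchemaBuilder

model_builder = ModelBuilder(
    model=pytorch_model,                             # a trained torch.nn.Module
    schema_builder=SchemaBuilder(sample_input, sample_output),
    role_arn=execution_role,
    model_server=ModelServer.TRITON,                 # serve with Triton Inference Server
)
predictor = model_builder.build().deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",                    # placeholder GPU instance type
)
```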
Refer to model-builder-triton.ipynb to deploy a model with Triton.
In this example, we show you how to deploy a pre-trained transformer model provided by Hugging Face to SageMaker. We want to use the Hugging Face pipeline to load the model, so we create a custom inference spec for ModelBuilder:
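A sketch of such a spec, assuming a small translation pipeline (the task and model ID are illustrative):

```python
from sagemaker.serve.spec.inference_spec import InferenceSpec
from transformers import pipeline

class MyInferenceSpec(InferenceSpec):
    def load(self, model_dir: str):
        # load the model with the Hugging Face pipeline API
        return pipeline("translation_en_to_fr", model="t5-small")

    def invoke(self, input_object, model):
        # run the pipeline on the incoming request
        return model(input_object)

inf_spec = MyInferenceSpec()
```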
We also define the input and output of the inference workload by defining the SchemaBuilder object based on the model input and output:
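Continuing the translation example, a sample request and response are enough for SchemaBuilder:

```python
from sagemaker.serve import SchemaBuilder

sample_input = "How is the demo going?"
sample_output = [{"translation_text": "Comment se passe la démo ?"}]
schema_builder = SchemaBuilder(sample_input, sample_output)
```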
Then we create the ModelBuilder object and deploy the model onto a SageMaker endpoint following the same logic as shown in the other example:
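A sketch, using the inference spec and schema builder described above (the role ARN and instance type are placeholders):

```python
from sagemaker.serve import Mode, ModelBuilder

model_builder = ModelBuilder(
    inference_spec=inf_spec,        # the custom InferenceSpec
    schema_builder=schema_builder,  # sample input/output for marshaling
    role_arn=execution_role,
    mode=Mode.SAGEMAKER_ENDPOINT,
)
predictor = model_builder.build().deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)
```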
Refer to model-builder-huggingface.ipynb to deploy a Hugging Face pipeline model.
In the following examples, we showcase how to use ModelBuilder to deploy foundation models. Just like the models mentioned earlier, all that is required is the model ID.
If you want to deploy a foundation model from Hugging Face Hub, all you need to do is pass the pre-trained model ID. For example, the following code snippet deploys the meta-llama/Llama-2-7b-hf model locally. You can change the mode to Mode.SAGEMAKER_ENDPOINT to deploy to SageMaker endpoints.
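A sketch of such a snippet (the token and sample payloads are placeholders; gated models require an access token, as noted next):

```python
from sagemaker.serve import Mode, ModelBuilder, SchemaBuilder

model_builder = ModelBuilder(
    model="meta-llama/Llama-2-7b-hf",           # Hugging Face Hub model ID
    schema_builder=SchemaBuilder(sample_input, sample_output),
    mode=Mode.LOCAL_CONTAINER,                  # or Mode.SAGEMAKER_ENDPOINT
    env_vars={"HUGGING_FACE_HUB_TOKEN": "<your-token>"},  # required for gated models
)
local_predictor = model_builder.build().deploy()
```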
For gated models on Hugging Face Hub, you need to request access via Hugging Face Hub and pass the associated token as the environment variable HUGGING_FACE_HUB_TOKEN. Some Hugging Face models may require trusting remote code, which can likewise be set via the environment variable HF_TRUST_REMOTE_CODE. By default, ModelBuilder uses a Hugging Face Text Generation Inference (TGI) container as the underlying container for Hugging Face models. If you would like to use AWS Large Model Inference (LMI) containers, you can set the model_server parameter to ModelServer.DJL_SERVING when you configure the ModelBuilder object.
A neat feature of ModelBuilder is the ability to run local tuning of the container parameters when you use LOCAL_CONTAINER mode. This feature can be used by simply running tuned_model = model.tune().
Refer to demo-model-builder-huggingface-llama2.ipynb to deploy a Hugging Face Hub model.
Amazon SageMaker JumpStart also offers a number of pre-trained foundation models. Just like the process of deploying a model from Hugging Face Hub, the model ID is required. Deploying a SageMaker JumpStart model to a SageMaker endpoint is as straightforward as running the following code:
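For example, with a Falcon model ID from the JumpStart catalog (the ID shown is illustrative):

```python
from sagemaker.serve import ModelBuilder

model_builder = ModelBuilder(model="huggingface-llm-falcon-7b-bf16")  # JumpStart model ID
predictor = model_builder.build().deploy()
```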
For all available SageMaker JumpStart model IDs, refer to Built-in Algorithms with pre-trained Model Table. Refer to model-builder-jumpstart-falcon.ipynb to deploy a SageMaker JumpStart model.
ModelBuilder allows you to use the new inference component capability in SageMaker to deploy models. For more information on inference components, see Reduce Model Deployment Costs By 50% on Average Using SageMaker’s Latest Features. You can use inference components for deployment with ModelBuilder by specifying endpoint_type=EndpointType.INFERENCE_COMPONENT_BASED in the deploy() method. You can also use the tune() method, which fetches the optimal number of accelerators, and modify it if required.
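A minimal sketch (the import path for EndpointType reflects recent SDK versions and may vary):

```python
from sagemaker.enums import EndpointType

predictor = model_builder.deploy(
    endpoint_type=EndpointType.INFERENCE_COMPONENT_BASED,  # deploy as an inference component
)
```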
Refer to model-builder-inference-component.ipynb to deploy a model as an inference component.
The ModelBuilder class allows you to customize model loading using InferenceSpec.
In addition, you can control payload and response serialization and deserialization and customize preprocessing and postprocessing using CustomPayloadTranslator. When you need to extend our pre-built containers for model deployment on SageMaker, you can also use ModelBuilder to handle the model packaging process. In the following sections, we provide more details of these capabilities.
InferenceSpec offers an additional layer of customization. It allows you to define how the model is loaded and how it will handle incoming inference requests. Through InferenceSpec, you can define custom loading procedures for your models, bypassing the default loading mechanisms. This flexibility is particularly beneficial when working with non-standard models or custom inference pipelines. The invoke method can be customized, providing you with the ability to tailor how the model processes incoming requests (preprocessing and postprocessing). This customization can be essential to ensure that the inference process aligns with the specific needs of the model. See the following code:
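Conceptually, InferenceSpec is an abstract class with two methods to implement (a sketch of its shape):

```python
import abc

class InferenceSpec(abc.ABC):
    @abc.abstractmethod
    def load(self, model_dir: str):
        """Load the model from model_dir and return it."""

    @abc.abstractmethod
    def invoke(self, input_object, model):
        """Handle a request: preprocess, run the model, postprocess."""
```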
The following code shows an example of using this class:
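For instance, a spec that loads a TorchScript artifact with a custom procedure and disables gradients during invocation might look like this (the artifact file name is illustrative):

```python
import os

import torch
from sagemaker.serve.spec.inference_spec import InferenceSpec

class MyPyTorchSpec(InferenceSpec):
    def load(self, model_dir: str):
        # custom loading: a TorchScript artifact instead of the default mechanism
        model = torch.jit.load(os.path.join(model_dir, "model.pt"))
        model.eval()
        return model

    def invoke(self, input_object, model):
        # custom request handling: run inference without tracking gradients
        with torch.no_grad():
            return model(input_object)
```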
When invoking SageMaker endpoints, the data is sent through HTTP payloads with different MIME types. For example, an image sent to the endpoint for inference needs to be converted to bytes at the client side and sent through the HTTP payload to the endpoint. When the endpoint receives the payload, it needs to deserialize the byte string back to the data type that is expected by the model (also known as server-side deserialization). After the model finishes prediction, the results need to be serialized to bytes that can be sent back through the HTTP payload to the user or client. When the client receives the response byte data, it needs to perform client-side deserialization to convert the bytes data back to the expected data format, such as JSON. At a minimum, you need to convert the data for the following (as numbered in the following diagram):
The following diagram shows the process of serialization and deserialization during the invocation process.
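The four conversions can be illustrated in plain Python, with JSON over bytes standing in for the HTTP payload (purely illustrative; ModelBuilder generates these marshaling functions for you):

```python
import io
import json

# 1. Client-side serialization: in-memory object -> bytes for the HTTP request
request_obj = {"features": [5.1, 3.5, 1.4, 0.2]}
request_bytes = json.dumps(request_obj).encode("utf-8")

# 2. Server-side deserialization: request bytes -> the object the model expects
received = json.load(io.BytesIO(request_bytes))

prediction = {"label": 0}  # stand-in for the model's output

# 3. Server-side serialization: prediction -> bytes for the HTTP response
response_bytes = json.dumps(prediction).encode("utf-8")

# 4. Client-side deserialization: response bytes -> a usable result
result = json.load(io.BytesIO(response_bytes))
print(result)  # {'label': 0}
```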

In the following code snippet, we show an example of CustomPayloadTranslator when additional customization is needed to handle both serialization and deserialization in the client and server side, respectively:
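A sketch of a pair of translators that move NumPy arrays as raw .npy bytes (the method names follow the CustomPayloadTranslator interface; the .npy payload format is our own choice for this example):

```python
import io

import numpy as np
from sagemaker.serve import CustomPayloadTranslator, SchemaBuilder

class MyRequestTranslator(CustomPayloadTranslator):
    # client side: convert the input payload to bytes before sending
    def serialize_payload_to_bytes(self, payload: object) -> bytes:
        buffer = io.BytesIO()
        np.save(buffer, payload)
        return buffer.getvalue()

    # server side: convert the received bytes back to the model's input type
    def deserialize_payload_from_stream(self, stream) -> object:
        return np.load(io.BytesIO(stream.read()))

class MyResponseTranslator(CustomPayloadTranslator):
    # server side: serialize the prediction to bytes
    def serialize_payload_to_bytes(self, payload: object) -> bytes:
        buffer = io.BytesIO()
        np.save(buffer, payload)
        return buffer.getvalue()

    # client side: deserialize the response bytes into a result
    def deserialize_payload_from_stream(self, stream) -> object:
        return np.load(io.BytesIO(stream.read()))

# wire the translators into the schema builder alongside the samples
schema_builder = SchemaBuilder(
    sample_input,
    sample_output,
    input_translator=MyRequestTranslator(),
    output_translator=MyResponseTranslator(),
)
```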
In the demo-model-builder-pytorch.ipynb notebook, we demonstrate how to easily deploy a PyTorch model to a SageMaker endpoint using ModelBuilder with the CustomPayloadTranslator and the InferenceSpec class.
If you want to stage the model for inference or in the model registry, you can use model.create() or model.register(). The model is created on the service, and you can deploy it later. See the following code:
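A sketch (here model is the object returned by model_builder.build(); the names are placeholders):

```python
# create the model on the service without deploying an endpoint yet
model.create(model_name="my-model-name")

# or register a version in the SageMaker Model Registry
model.register(
    model_package_group_name="my-model-group",
    content_types=["application/json"],
    response_types=["application/json"],
)
```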
SageMaker provides pre-built Docker images for its built-in algorithms and the supported deep learning frameworks used for training and inference. If a pre-built SageMaker container doesn’t fulfill all your requirements, you can extend the existing image to accommodate your needs. By extending a pre-built image, you can use the included deep learning libraries and settings without having to create an image from scratch. For more details about how to extend the pre-built containers, refer to the SageMaker documentation. ModelBuilder supports use cases when bringing your own containers that are extended from our pre-built Docker containers.
To use your own container image in this case, you need to set the fields image_uri and model_server when defining ModelBuilder:
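For example (image_uri points at your extended image in Amazon ECR, and the model server must match what is baked into that image):

```python
from sagemaker.serve import ModelBuilder, ModelServer, SchemaBuilder

model_builder = ModelBuilder(
    model=model,
    schema_builder=SchemaBuilder(X_test, y_pred),
    role_arn=execution_role,
    image_uri=image_uri,                  # your extended container image in Amazon ECR
    model_server=ModelServer.TORCHSERVE,  # the server your extended image is built on
)
```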
Here, the image_uri will be the container image URI that is stored in your account’s Amazon Elastic Container Registry (Amazon ECR) repository. One example is shown as follows:
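For instance (the account ID, Region, repository name, and tag are placeholders):

```python
image_uri = "111122223333.dkr.ecr.us-west-2.amazonaws.com/byoc-image:xgb-1.7-1"
```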
When the image_uri is set, the ModelBuilder build process will skip auto detection of the image because the image URI is provided. If model_server is not set in ModelBuilder, you will receive a validation error message, for example:
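The message reads along these lines (exact wording may vary by SDK version):

```
Model server must be set when a non-first-party image URI is provided.
Please set it based on the image URI.
```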
As of the publication of this post, ModelBuilder supports bringing your own containers that are extended from our pre-built DLC container images or containers built with the model servers like Deep Java Library (DJL), Text Generation Inference (TGI), TorchServe, and Triton inference server.
When you run ModelBuilder.build(), by default it automatically captures your Python environment into a requirements.txt file and installs the same dependencies in the container. However, sometimes your local Python environment conflicts with the environment in the container. ModelBuilder provides a simple way to modify the captured dependencies and fix such conflicts by allowing you to provide custom configurations. Note that this is only supported for TorchServe and Triton with InferenceSpec. For example, you can specify the input parameter dependencies, which is a Python dictionary, in ModelBuilder as follows:
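A sketch of such a configuration (the path and version pins are placeholders):

```python
from sagemaker.serve import ModelBuilder

model_builder = ModelBuilder(
    # ... model, schema_builder, role_arn, and other parameters ...
    dependencies={
        "auto": True,                                 # keep environment autodetection on
        "requirements": "/path/to/requirements.txt",  # your own requirements file
        "custom": ["numpy==1.26.1"],                  # explicit overrides, highest priority
    },
)
```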
We define the following fields: auto, which controls whether ModelBuilder autodetects dependencies from the current environment; requirements, the path to your own requirements.txt file; and custom, a list of explicitly specified dependencies.
If the same module is specified in multiple places, custom has the highest priority, then requirements, and auto has the lowest priority. For example, say that during autodetection, ModelBuilder detects numpy==1.25, and a requirements.txt file is provided that specifies numpy>=1.24,<1.26. Additionally, there is a custom dependency: custom = ["numpy==1.26.1"]. In this case, numpy==1.26.1 will be picked when we install dependencies in the container.
When you’re done testing the models, as a best practice, delete the endpoint to save costs if it is no longer required. You can follow the Clean up section in each of the demo notebooks or use the following code to delete the model and endpoint created by the demo:
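For example, using the predictor returned by deploy():

```python
# remove the model and the endpoint to stop incurring charges
predictor.delete_model()
predictor.delete_endpoint()
```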
The new SageMaker ModelBuilder capability simplifies the process of deploying ML models into production on SageMaker. By handling many of the complex details behind the scenes, ModelBuilder reduces the learning curve for new users and maximizes utilization for experienced users. With just a few lines of code, you can deploy models with built-in frameworks like XGBoost, PyTorch, Triton, and Hugging Face, as well as models provided by SageMaker JumpStart into robust, scalable endpoints on SageMaker.
We encourage all SageMaker users to try out this new capability by referring to the ModelBuilder documentation page. ModelBuilder is available now to all SageMaker users at no additional charge. Take advantage of this simplified workflow to get your models deployed faster. We look forward to hearing how ModelBuilder accelerates your model development lifecycle!
Special thanks to Sirisha Upadhyayala, Raymond Liu, Gary Wang, Dhawal Patel, Deepak Garg and Ram Vegiraju.
Melanie Li, PhD, is a Senior AI/ML Specialist TAM at AWS based in Sydney, Australia. She helps enterprise customers build solutions using state-of-the-art AI/ML tools on AWS and provides guidance on architecting and implementing ML solutions with best practices. In her spare time, she loves to explore nature and spend time with family and friends.
Marc Karp is an ML Architect with the Amazon SageMaker Service team. He focuses on helping customers design, deploy, and manage ML workloads at scale. In his spare time, he enjoys traveling and exploring new places.
Sam Edwards, is a Cloud Engineer (AI/ML) at AWS Sydney specialized in machine learning and Amazon SageMaker. He is passionate about helping customers solve issues related to machine learning workflows and creating new solutions for them. Outside of work, he enjoys playing racquet sports and traveling.
Raghu Ramesha is a Senior ML Solutions Architect with the Amazon SageMaker Service team. He focuses on helping customers build, deploy, and migrate ML production workloads to SageMaker at scale. He specializes in machine learning, AI, and computer vision domains, and holds a master’s degree in Computer Science from UT Dallas. In his free time, he enjoys traveling and photography.
Shiva Raaj Kotini works as a Principal Product Manager in the Amazon SageMaker inference product portfolio. He focuses on model deployment, performance tuning, and optimization in SageMaker for inference.
Mohan Gandhi is a Senior Software Engineer at AWS. He has been with AWS for the last 10 years and has worked on various AWS services like EMR, EFA and RDS. Currently, he is focused on improving the SageMaker Inference Experience. In his spare time, he enjoys hiking and marathons.
Saurabh Trikande is a Senior Product Manager for Amazon SageMaker Inference. He is passionate about working with customers and is motivated by the goal of democratizing machine learning. He focuses on core challenges related to deploying complex ML applications, multi-tenant ML models, cost optimizations, and making deployment of deep learning models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch, and spending time with his family.
