
ROLE SUMMARY:
- Our growing company is seeking a Scrum Master to join our Project Function team and help organize our ongoing and new projects. The role involves facilitating cross-functional teams in applying Agile principles, organizing and participating in stakeholder meetings, and removing the impediments that prevent teams from completing their work.
- To be successful as a Scrum Master, you will need to work to tight deadlines and have exceptional verbal, written, and presentation skills. An MBA and at least one year of experience in the IT industry are required for consideration.
QUALIFICATION:
- 3-6 years of experience in Agile and Scrum; exposure to JIRA
- CSM (Scrum Alliance) or PSM (Scrum.org) certification (mandatory)
ROLES AND RESPONSIBILITIES:
- 1. Lead the Scrum Team in using Agile methodology and Scrum/Kanban practices.
- 2. Ensure that goals, scope, and product domain are understood by everyone on the Scrum Team as well as possible.
- 3. Guide and coach the Scrum Team on using Agile practices and principles to deliver high-quality products and services to our customers.
- 4. Ensure that Scrum/Kanban is understood and that the team adheres to Scrum/Kanban theory, practice, and guidelines.
- 5. Train, mentor, and support Scrum teams in following Agile values, principles, and practices.
- 6. Coach the Scrum Team in self-organization, cross-functional skill sets, and domain knowledge, and communicate effectively both within and outside the team.
- 7. Facilitate and support all Scrum events: Sprint Planning, Daily Scrum, Sprint Review, and Sprint Retrospective.
- 8. Act as the point of contact for external communications (e.g., from customers or stakeholders); facilitate internal communication and effective collaboration.
- 9. Determine and manage tasks, issues, risks, and action items; remove impediments to the Development Team's progress.
- 10. Monitor progress and performance and help teams make improvements.
- 11. Plan and organize demos and product/system testing.
- 12. Assist the Product Owner with the Product Backlog.
- 13. Work with Agile Coaches and other Scrum Masters to grow within the role.
- 14. Resolve team impediments with other Scrum Masters to increase the effectiveness of Scrum across the organization.
COMPETENCIES:
- Excellent verbal and written communications
- Strategic decision making
- Willingness to learn and develop your skill set
- Works well within a team environment
- Logical thinker with good problem solving and analytical skills
- Willingness to get to the root of the problem
- A proactive, can-do, can-think approach to the job
LOCATION:
- All roles at Byteridge are work-from-home, except where clients ask you to work at their location or where performance issues are identified. If a client asks you to work in a location other than our base locations (Hyderabad, Chennai, Bangalore & Delhi), you may claim extra reimbursement as per policy.

About Hudson Data
At Hudson Data, we view AI as both an art and a science. Our cross-functional teams — spanning business leaders, data scientists, and engineers — blend AI/ML and Big Data technologies to solve real-world business challenges. We harness predictive analytics to uncover new revenue opportunities, optimize operational efficiency, and enable data-driven transformation for our clients.
Beyond traditional AI/ML consulting, we actively collaborate with academic and industry partners to stay at the forefront of innovation. Alongside delivering projects for Fortune 500 clients, we also develop proprietary AI/ML products addressing diverse industry challenges.
Headquartered in New Delhi, India, with an office in New York, USA, Hudson Data operates globally, driving excellence in data science, analytics, and artificial intelligence.
⸻
About the Role
We are seeking a Data Analyst & Modeling Specialist with a passion for leveraging AI, machine learning, and cloud analytics to improve business processes, enhance decision-making, and drive innovation. You’ll play a key role in transforming raw data into insights, building predictive models, and delivering data-driven strategies that have real business impact.
⸻
Key Responsibilities
1. Data Collection & Management
• Gather and integrate data from multiple sources including databases, APIs, spreadsheets, and cloud warehouses.
• Design and maintain ETL pipelines ensuring data accuracy, scalability, and availability.
• Utilize any major cloud platform (Google Cloud, AWS, or Azure) for data storage, processing, and analytics workflows.
• Collaborate with engineering teams to define data governance, lineage, and security standards.
2. Data Cleaning & Preprocessing
• Clean, transform, and organize large datasets using Python (pandas, NumPy) and SQL.
• Handle missing data, duplicates, and outliers while ensuring consistency and quality (see the sketch after this list).
• Automate data preparation using Linux scripting, Airflow, or cloud-native schedulers.
3. Data Analysis & Insights
• Perform exploratory data analysis (EDA) to identify key trends, correlations, and drivers.
• Apply statistical techniques such as regression, time-series analysis, and hypothesis testing.
• Use Excel (including pivot tables) and BI tools (Tableau, Power BI, Looker, or Google Data Studio) to develop insightful reports and dashboards.
• Present findings and recommendations to cross-functional stakeholders in a clear and actionable manner.
4. Predictive Modeling & Machine Learning
• Build and optimize predictive and classification models using scikit-learn, XGBoost, LightGBM, TensorFlow, Keras, and H2O.ai.
• Perform feature engineering, model tuning, and cross-validation for performance optimization.
• Deploy and manage ML models using Vertex AI (GCP), AWS SageMaker, or Azure ML Studio.
• Continuously monitor, evaluate, and retrain models to ensure business relevance.
5. Reporting & Visualization
• Develop interactive dashboards and automated reports for performance tracking.
• Use pivot tables, KPIs, and data visualizations to simplify complex analytical findings.
• Communicate insights effectively through clear data storytelling.
6. Collaboration & Communication
• Partner with business, engineering, and product teams to define analytical goals and success metrics.
• Translate complex data and model results into actionable insights for decision-makers.
• Advocate for data-driven culture and support data literacy across teams.
7. Continuous Improvement & Innovation
• Stay current with emerging trends in AI, ML, data visualization, and cloud technologies.
• Identify opportunities for process optimization, automation, and innovation.
• Contribute to internal R&D and AI product development initiatives.
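To make the preprocessing and modeling bullets above concrete, here is a minimal illustrative sketch of one such pass in Python. It is not part of the role description: the dataset is synthetic, and every column name, threshold, and model choice is a hypothetical stand-in.
```python
# Minimal sketch of a cleaning + modeling pass (synthetic data; all names
# and thresholds are hypothetical, not taken from the job description).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Fabricate raw data with the usual defects: missing values and duplicates.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "revenue": rng.normal(100, 20, 500),
    "visits": rng.poisson(30, 500).astype(float),
    "churned": rng.integers(0, 2, 500),
})
df.loc[rng.choice(500, 25, replace=False), "visits"] = np.nan  # missing values
df = pd.concat([df, df.head(10)], ignore_index=True)           # duplicates

# Cleaning: drop duplicates, impute missing values, clip outliers (IQR rule).
df = df.drop_duplicates()
df["visits"] = df["visits"].fillna(df["visits"].median())
q1, q3 = df["revenue"].quantile([0.25, 0.75])
iqr = q3 - q1
df["revenue"] = df["revenue"].clip(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

# Modeling: a baseline classifier scored with 5-fold cross-validation.
X, y = df[["revenue", "visits"]], df["churned"]
model = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean CV AUC: {scores.mean():.3f}")
```
Median imputation and IQR clipping are just one reasonable default; in practice the strategy depends on the data and the business question.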
⸻
Required Skills & Qualifications
Technical Skills
• Programming: Proficient in Python (pandas, NumPy, scikit-learn, XGBoost, LightGBM, TensorFlow, Keras, H2O.ai).
• Databases & Querying: Advanced SQL skills; experience with BigQuery, Redshift, or Azure Synapse is a plus.
• Cloud Expertise: Hands-on experience with one or more major platforms — Google Cloud, AWS, or Azure.
• Visualization & Reporting: Skilled in Tableau, Power BI, Looker, or Excel (pivot tables, data modeling).
• Data Engineering: Familiarity with ETL tools (Airflow, dbt, or similar).
• Operating Systems: Strong proficiency with Linux/Unix for scripting and automation.
Soft Skills
• Strong analytical, problem-solving, and critical-thinking abilities.
• Excellent communication and presentation skills, including data storytelling.
• Curiosity and creativity in exploring and interpreting data.
• Collaborative mindset, capable of working in cross-functional and fast-paced environments.
⸻
Education & Certifications
• Bachelor’s degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
• Master’s degree in Data Analytics, Machine Learning, or Business Intelligence preferred.
• Relevant certifications are highly valued:
• Google Cloud Professional Data Engineer
• AWS Certified Data Analytics – Specialty
• Microsoft Certified: Azure Data Scientist Associate
• TensorFlow Developer Certificate
⸻
Why Join Hudson Data
At Hudson Data, you’ll be part of a dynamic, innovative, and globally connected team that uses cutting-edge tools — from AI and ML frameworks to cloud-based analytics platforms — to solve meaningful problems. You’ll have the opportunity to grow, experiment, and make a tangible impact in a culture that values creativity, precision, and collaboration.
Content Writing Intern (Remote) — The Grey Post
🕐 Duration: 3 months | 💵 Stipend: Unpaid | 🌎 Location: Remote (USA Preferred)
About The Grey Post
TheGreyPost.com is a rising independent media platform covering U.S. news, global affairs, offbeat stories, lifestyle, culture, and deeply researched long-reads with a fresh voice. We are on a mission to publish content that matters — beyond the noise, beyond the ordinary.
Now, we’re inviting curious minds and sharp writers to join us on an exciting journey — as interns who want to be noticed.
🔍 What You’ll Do
- Research and write high-quality blog posts, opinion pieces, or news explainers (600–1500+ words).
- Contribute to content categories like Trending US News, States of Mind, The Feed, Deep Grey, and more.
- Use simple, clear, and engaging language — just like you're explaining something to a smart 8th grader.
- Learn basic SEO writing, headline optimization, and viral angles.
- Collaborate via Notion, Docs, Slack/Email — fully remote and async-friendly.
🧠 What You’ll Gain
- Real bylines on a fast-growing publication — boost your resume and portfolio!
- Mentorship on writing for the web, EEAT, and journalism standards.
- Letter of Recommendation & Certificate upon successful completion.
- Possibility of paid freelance work or contributor status after internship.
- Priority consideration for future editorial roles and features.
✅ You’re a Great Fit If…
- You’re based in the USA or have excellent knowledge of U.S. culture/news.
- You enjoy explaining complex things simply.
- You’re comfortable working independently and meeting deadlines.
- You're studying journalism, English, media, or simply love writing.
🎯 Bonus If You…
- Have experience with WordPress or Google Docs.
- Have written on Medium, Substack, your blog, or college magazine.
- Love writing about society, tech, politics, strange laws, or offbeat stories.
📩 How to Apply
Send us:
- A short bio (2-3 lines)
- 1 writing sample (or link to blog/Medium/portfolio)
- 2 content ideas you’d love to write for The Grey Post
👉 Email: thegreypost.com
Subject: “Content Writing Intern — [Your Name]”
Note: This is a volunteer-based, learning-focused internship. We value creative freedom, flexible hours, and personal growth over formal titles. Perfect for students and passionate writers who want to make their words matter.
Responsibilities:
• Build customer-facing solutions for a Data Observability product that monitors data pipelines (a minimal sketch follows this list).
• Work on POCs to build new data pipeline monitoring capabilities.
• Build next-generation scalable, reliable, flexible, high-performance data pipeline capabilities for ingesting data from multiple sources containing complex datasets.
• Continuously improve the services you own, making them more performant and utilising resources in the most optimised way.
• Collaborate closely with the engineering, data science, and product teams to propose an optimal solution for a given problem statement.
• Work closely with the DevOps team on performance monitoring and MLOps.
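As a rough illustration of these responsibilities, the sketch below shows a minimal PySpark Structured Streaming job that ingests pipeline events from Kafka and aggregates a simple health metric (records per source per minute). The broker address, topic name, and event schema are hypothetical placeholders, not details of the actual product.
```python
# Minimal pipeline-monitoring sketch: consume events from Kafka and count
# records per source per minute. Broker, topic, and schema are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("pipeline-monitor-sketch").getOrCreate()

# Hypothetical event schema emitted by upstream pipelines.
schema = StructType([
    StructField("source", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
    .option("subscribe", "pipeline-events")               # assumed topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Tumbling one-minute windows; events later than 5 minutes are dropped.
health = (
    events.withWatermark("event_time", "5 minutes")
    .groupBy(window(col("event_time"), "1 minute"), col("source"))
    .count()
)

query = health.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```
The same aggregation could instead be written to a metrics store and alerted on; the console sink is used only to keep the sketch self-contained.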
Required Skills:
• 3+ years of experience with data-related technologies.
• Good understanding of distributed computing principles
• Experience in Apache Spark
• Hands on programming with Python
• Knowledge of Hadoop v2, MapReduce, HDFS
• Experience with building stream-processing systems, using technologies such as Apache Storm, Spark-Streaming or Flink
• Experience with messaging systems, such as Kafka or RabbitMQ
• Good understanding of Big Data querying tools, such as Hive
• Experience with integration of data from multiple data sources
• Good understanding of SQL queries, joins, stored procedures, relational schemas
• Experience with NoSQL databases, such as HBase, Cassandra/Scylla, MongoDB
• Knowledge of ETL techniques and frameworks
• Performance tuning of Spark jobs
• A general understanding of Data Quality is a plus
• Experience with Databricks, Snowflake, BigQuery, or similar lakehouses would be a big plus
• Some knowledge of DevOps is nice to have
TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent, well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. Renowned reference customers, competent technology, a strong research team from renowned universities, and a prestigious AI award (e.g., EU Horizon 2020) make TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing industry, with over two years of experience, to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python; additional experience with Azure and Terraform is a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
- Experience in the manufacturing industry (metal industry is a plus)
- 2+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines (see the sketch below).
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Spark, Flink, or Hadoop, and of NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for managing cloud infrastructure as code (IaC).
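As a small illustration of the ETL skills listed above, here is a self-contained extract-transform-load sketch in Python with pandas. The data, column names, and SQLite target are all hypothetical; a production pipeline would read from real plant systems and load into a managed warehouse.
```python
# Toy ETL sketch: fabricated sensor data, basic cleaning, load into SQLite.
# All names (machine_id, temp_c, plant.db) are hypothetical placeholders.
import sqlite3

import pandas as pd


def extract() -> pd.DataFrame:
    # Stand-in for reading from sensors, MES exports, or an upstream API.
    return pd.DataFrame({
        "machine_id": ["M1", "M1", "M2", None],
        "temp_c": [71.2, None, 68.5, 70.0],
    })


def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["machine_id"])                    # drop unusable rows
    df["temp_c"] = df["temp_c"].fillna(df["temp_c"].mean())  # impute gaps
    return df


def load(df: pd.DataFrame, db_path: str = "plant.db") -> None:
    with sqlite3.connect(db_path) as conn:
        df.to_sql("sensor_readings", conn, if_exists="append", index=False)


if __name__ == "__main__":
    load(transform(extract()))
```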
About the Role
We are actively seeking talented Senior Python Developers to join our ambitious team dedicated to pushing the frontiers of AI technology. This opportunity is tailored for professionals who thrive on developing innovative solutions and aspire to be at the forefront of AI advancements. You will work with different companies in the US that are looking to develop both commercial and research AI solutions.
Required Skills:
- Write effective Python code to tackle complex issues
- Use business sense and analytical abilities to glean valuable insights from public databases
- Clearly express the reasoning and logic when writing code in Jupyter notebooks or other suitable mediums
- Extensive experience working with Python
- Proficiency with the language's syntax and conventions
- Previous experience tackling algorithmic problems
- Some prior Software Quality Assurance and Test Planning experience is nice to have
- Excellent spoken and written English communication skills
The ideal candidates should be able to (a brief sketch follows this list):
- Clearly explain their strategies for problem-solving.
- Design practical solutions in code.
- Develop test cases to validate their solutions.
- Debug and refine their solutions for improvement.
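For illustration, here is a minimal sketch of the kind of clearly reasoned solution plus validating test cases this list implies. The problem (two-sum) is a generic example chosen for brevity, not one taken from the posting.
```python
# Sketch: a small algorithmic solution with its reasoning and test cases.
def two_sum(nums: list[int], target: int) -> tuple[int, int] | None:
    """Return indices of two numbers summing to target, or None.

    Strategy: a single pass with a value -> index hash map gives O(n) time,
    versus the O(n^2) brute-force scan over all pairs.
    """
    seen: dict[int, int] = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return seen[target - x], i
        seen[x] = i
    return None


# Test cases covering the happy path, duplicate values, and no solution.
assert two_sum([2, 7, 11, 15], 9) == (0, 1)
assert two_sum([3, 3], 6) == (0, 1)
assert two_sum([1, 2], 7) is None
print("all tests passed")
```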
Female candidates only
Location: Bangalore
Experience: 10+ years
5 days working (hybrid)
- Strong experience in MEAN stack technologies
- This role will primarily be leading/mentoring Application Development, Database, and Testing teams
Brief Job description
In order to successfully create world-class pricing products you will need to bring rock-solid back-end skills, micro-service architecture experience and a passion for leading high-performing Agile developer teams.
What you will do
….
Basic Qualifications
- 8+ years of experience as a software developer with a clear history of technical excellence, depth and leadership
- 5+ years of experience in managing software development with a strong track record of on-time delivery for large, cross-functional projects
- Experience with hiring, attracting, motivating, coaching, retaining and developing engineers and their leaders.
- Experience with scalable API development and REST services
- Excellent communication, adaptability and collaboration skills
- Bachelor’s degree in Engineering, Computer Science or related field
What You Can Do To Stand Out
- Experience with modern development technologies/languages and cloud-native development
- Proficiency with SQL, data processing and supporting architectural patterns
- Background in Containers (Docker, Kubernetes, Mesos, etc.)
- Automated test, build deployment tools, continuous integration
Area of professional exposure (functional/technical skills):
· Hands-on experience with Google Cloud Platform (GCP) services (Dataflow, Cloud Run, BigQuery, Cloud SQL for SQL Server, GCS, etc.)
· Experience in deploying to GCP environments (Cloud Run/CaaS/OpenShift/Containerization)
· Experience with Continuous Integration/Continuous Delivery tools and pipelines – Tekton, Terraform and Jenkins
· Experience with Java/J2EE frameworks, Web development and REST APIs/microservices development (Spring Boot, Spring Cloud)
· Good exposure to GCP application development.
· Hands-on experience with Angular (front-end development)
· Experience in using build tools such as Gradle, Maven, and Ant
· Experience with GitHub or equivalent source control repositories
· Experience in developing data processing tasks using Spark/PySpark
· Good working knowledge of relational databases like MS SQL, Oracle etc.
We are looking for an iOS Developer responsible for the development and maintenance of applications aimed at a range of iOS devices, including mobile phones and tablet computers. Your primary focus will be the development of iOS applications and their integration with back-end services. You will be working alongside other engineers and developers working on different layers of the infrastructure, so a commitment to collaborative problem solving, sophisticated design, and the creation of quality products is essential.
Roles and Responsibilities:
- Designing and building mobile applications for Apple’s iOS platform.
- Collaborating with the design team to define app features.
- Ensuring quality and performance of the application to specifications.
- Identifying potential problems and resolving application bottlenecks.
- Fixing application bugs before the final release.
- Publishing application on App Store.
- Maintaining the code and automation of the application.
- Designing and implementing application updates.
- Experience in sales and tele-calling
- Handle outbound calls
- Knowledge of sales
- Knowledge of the loan process
- Good communication skills
- Strong convincing/persuasion skills